
    A Master-Slave Approach for Object Detection and Matching with Fixed and Mobile Cameras

    Typical object detection algorithms on mobile cameras suffer from the lack of a-priori knowledge of the object to be detected. Variability in shape, pose, color distribution, and behavior affects the robustness of the detection process. In general, such variability is addressed by using a large amount of training data; however, only objects present in the training data can then be detected. This paper introduces a vision-based system to address this problem. A master-slave approach is presented in which a mobile camera (the slave) can match any object detected by a fixed camera (the master). Features extracted by the master camera are used to detect the object of interest in the slave camera without any training data. A single observation is enough, regardless of changes in illumination, viewpoint, color distribution, and image quality. A coarse-to-fine description of the object is presented, built upon image statistics robust to partial occlusions. Qualitative and quantitative results are presented for an indoor and an outdoor urban scene.
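    A minimal sketch of the kind of occlusion-robust, statistics-based matching the abstract describes (not the authors' exact descriptor): the object is split into a grid of cells, each cell is summarized by a color histogram, and per-cell distances are aggregated with a trimmed mean so a few occluded cells cannot dominate the score. All parameter values below are illustrative.

        import cv2
        import numpy as np

        def cell_histograms(patch, grid=(3, 3), bins=8):
            """Normalized hue-saturation histogram for each cell of a grid over the patch."""
            hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
            h, w = hsv.shape[:2]
            hists = []
            for i in range(grid[0]):
                for j in range(grid[1]):
                    cell = hsv[i * h // grid[0]:(i + 1) * h // grid[0],
                               j * w // grid[1]:(j + 1) * w // grid[1]]
                    hist = cv2.calcHist([cell], [0, 1], None, [bins, bins],
                                        [0, 180, 0, 256])
                    hists.append(cv2.normalize(hist, None).flatten())
            return hists

        def occlusion_robust_distance(hists_a, hists_b, keep=0.7):
            """Trimmed mean of per-cell Bhattacharyya distances: the worst cells
            (potentially occluded) are discarded before averaging."""
            d = [cv2.compareHist(a, b, cv2.HISTCMP_BHATTACHARYYA)
                 for a, b in zip(hists_a, hists_b)]
            d = np.sort(d)[:max(1, int(keep * len(d)))]
            return float(np.mean(d))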

    A Master-Slave Approach to Detect and Match Objects Across Several Uncalibrated Moving Cameras

    Most multi-camera systems assume a well-structured environment to detect and match objects across cameras: cameras need to be fixed and calibrated. In this work, a novel system is presented to detect and match any object in a network of uncalibrated fixed and mobile cameras. A master-slave system is presented: objects are detected with the mobile cameras (the slaves) given only their observations from the fixed cameras (the masters). No training stage or training data is used. Detected objects are correctly matched across cameras, leading to a better understanding of the scene. A cascade of dense region descriptors is proposed to describe any object of interest. Various region descriptors are studied, such as color histograms, histograms of oriented gradients, Haar-wavelet responses, and covariance matrices of various features. The proposed approach outperforms existing work such as the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF). Moreover, a sparse scan of the image plane is proposed to reduce the search space of the detection and matching process, approaching near real-time performance. The approach is robust to changes in illumination, viewpoint, color distribution, and image quality. Partial occlusions are also handled.
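    One of the descriptor families studied is the covariance matrix of per-pixel features. Below is a minimal sketch of how such a region covariance descriptor (in the style of Tuzel et al.) can be computed and compared; the feature set [x, y, I, |Ix|, |Iy|] and the regularization constant are illustrative choices, not the paper's exact configuration.

        import numpy as np
        from scipy.linalg import eigh

        def covariance_descriptor(gray):
            """Covariance of per-pixel features [x, y, I, |Ix|, |Iy|] over a region."""
            gray = gray.astype(np.float64)
            ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
            ix = np.gradient(gray, axis=1)
            iy = np.gradient(gray, axis=0)
            feats = np.stack([xs.ravel(), ys.ravel(), gray.ravel(),
                              np.abs(ix).ravel(), np.abs(iy).ravel()])
            return np.cov(feats)                      # 5x5 symmetric matrix

        def covariance_distance(c1, c2, eps=1e-6):
            """Foerstner metric: sqrt of the sum of squared log generalized eigenvalues."""
            d = c1.shape[0]
            lam = eigh(c1 + eps * np.eye(d), c2 + eps * np.eye(d), eigvals_only=True)
            return float(np.sqrt(np.sum(np.log(lam) ** 2)))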

    Autonomous Multicamera Tracking on Embedded Smart Cameras

    There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging, since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required, resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons on our campus.
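    Tracking within a single camera relies on the CamShift algorithm; a minimal single-camera sketch using OpenCV's implementation is shown below (the decentralized handover logic is separate). The video path and initial bounding box are placeholders.

        import cv2

        cap = cv2.VideoCapture("video.avi")            # placeholder input
        ok, frame = cap.read()
        x, y, w, h = 200, 150, 60, 120                 # placeholder initial object box
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([roi], [0], None, [16], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        track_window = (x, y, w, h)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            # CamShift adapts the window size and orientation as the object moves;
            # in the multicamera system, a window leaving the field of view would
            # trigger the mobile-agent handover to the neighboring camera.
            rot_box, track_window = cv2.CamShift(backproj, track_window, term)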

    FPGA-based module for SURF extraction

    We present a complete hardware and software solution: an FPGA-based embedded computer vision module capable of carrying out the SURF image feature extraction algorithm. Aside from image analysis, the module embeds a Linux distribution that allows it to run programs specifically tailored for particular applications. The module is based on a Virtex-5 FXT FPGA, which features powerful configurable logic and an embedded PowerPC processor. We describe the module hardware as well as the custom FPGA image processing cores that implement the algorithm's most computationally expensive stage, interest point detection. The module's overall performance is evaluated and compared to CPU- and GPU-based solutions. Results show that the embedded module achieves distinctiveness comparable to the SURF software implementation running on a standard CPU while being faster and consuming significantly less power and space. It thus makes the SURF algorithm usable in applications with power and space constraints, such as autonomous navigation of small mobile robots.
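    For reference, a CPU baseline of the kind the module is compared against can be reproduced with OpenCV's SURF implementation, sketched below. Note that SURF is non-free, so this assumes an opencv-contrib build with the non-free modules enabled; the image path and Hessian threshold are placeholders.

        import cv2

        img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)     # placeholder image
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        # Interest point detection is the stage the custom FPGA cores accelerate;
        # descriptor computation for each keypoint follows it.
        keypoints, descriptors = surf.detectAndCompute(img, None)
        print(len(keypoints), "interest points", descriptors.shape)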

    Development of Multi-Robotic Arm System for Sorting System Using Computer Vision

    This paper develops a multi-robotic-arm system and a stereo vision system to sort objects into the right position according to size and shape attributes. The robotic arm system consists of one master and three slave robots associated with three conveyor belts. Each robotic arm is controlled by a robot controller based on a microcontroller. A master controller is used for the vision system and communicates with the slave robotic arms using the Modbus RTU protocol over an RS485 serial interface. The stereo vision system is built to determine the 3D coordinates of each object. Instead of rebuilding the entire disparity map, which is computationally expensive, the centroids of the objects in the two images are used to determine the depth value. From there, the 3D coordinates of the object are calculated using the pinhole camera model. Objects are picked up and placed on a conveyor branch according to their shape. The conveyor transports the object to the location of the slave robot. Based on the size attribute that the slave robot receives from the master, the object is picked up and placed in the right position. Experimental results reveal the effectiveness of the system. The system can be used in industrial processes to reduce the required time and improve the performance of the production line.
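    The depth computation from centroid disparity follows directly from the rectified pinhole stereo model: Z = f*B/d, where d is the horizontal disparity between the two centroids. A minimal sketch with placeholder intrinsics (focal length f in pixels, baseline B in meters, principal point (cx, cy)) is shown below.

        def triangulate_centroid(uL, vL, uR, f=800.0, B=0.12, cx=320.0, cy=240.0):
            """Rectified pinhole stereo: depth from the centroid disparity d = uL - uR."""
            d = uL - uR
            if d <= 0:
                raise ValueError("non-positive disparity: check camera order/rectification")
            Z = f * B / d                    # depth in meters
            X = (uL - cx) * Z / f            # lateral offset
            Y = (vL - cy) * Z / f            # vertical offset
            return X, Y, Z

        # Example: centroids at (352, 260) left and (322, 260) right give d = 30 px,
        # so Z = 800 * 0.12 / 30 = 3.2 m.
        print(triangulate_centroid(352.0, 260.0, 322.0))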

    Design and implementation of a domestic disinfection robot based on 2D lidar

    In the battle against COVID-19, the demand for disinfection robots in China and other countries has increased rapidly. Manual disinfection is time-consuming, laborious, and poses safety hazards; for large public areas, the deployment of human resources and the effectiveness of disinfection face significant challenges, so robots become an ideal choice. At present, most disinfection robots on the market use ultraviolet light, disinfectant, or both. They are mostly deployed in hospitals, airports, hotels, shopping malls, office buildings, and other places with high daily foot traffic, and often come with built-in automatic navigation and intelligent recognition to ensure day-to-day operation. However, they are usually expensive and need regular maintenance. Sweeping robots and window-cleaning robots have been put into massive use, but domestic disinfection robots have not gained much attention, even though the health and safety of a family are also critical in epidemic prevention. This thesis proposes and implements a low-cost, 2D-lidar-based domestic disinfection robot that provides dry-fog disinfection, ultraviolet disinfection, and air cleaning. The thesis is mainly engaged in the following work.

    The design and implementation of the control board of the robot chassis are elaborated. The control board uses an STM32F103ZET6 as the MCU. Infrared sensors prevent the robot from falling and let it follow walls; an ultrasonic sensor at the front of the chassis detects and avoids obstacles in its path; photoelectric switches record collisions during the early phase of mapping. For air purification the robot adopts a centrifugal fan and a HEPA filter. A ceramic atomizer breaks the disinfectant into fine droplets to produce the dry fog, and a UV germicidal lamp at the bottom of the chassis disinfects the ground. An air pollution sensor estimates the air quality, and motors drive the chassis. The lidar transmits its data to the navigation board directly through wires and an edge-board contact on the control board. The control board also manages the atmosphere LEDs, horn, push-buttons, battery, LCD, and temperature-humidity sensor; it exchanges data with, and executes commands from, the navigation board and manages all kinds of peripheral devices, making it the administrative unit of the disinfection robot. Moreover, the robot is designed in a way that reduces cost while ensuring quality.

    The control board's embedded software is realized and analyzed in the thesis. The communication protocol that links the control board and the navigation board is implemented in software: standard commands, specific commands, error handling, and the data packet format are detailed and processed. The software effectively drives and manages the peripheral devices. SLAMWARE CORE is used as the navigation board to complete the system design. System tests covering disinfecting, mapping, navigating, and anti-falling were performed to polish and adjust the structure and functionality of the robot. A Raspberry Pi is also used with the control board to explore 2D Simultaneous Localization and Mapping (SLAM) algorithms, such as Hector, Karto, and Cartographer, in the Robot Operating System (ROS) for the robot's further development.

    The thesis is written from the perspective of engineering practice and proposes a feasible design for a domestic disinfection robot. Hardware, embedded software, and system tests are all covered.
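    The abstract does not give the packet format itself, so the following is only a hedged illustration of the kind of framed, checksummed packet such a control-board/navigation-board link typically uses; the header byte, command codes, and XOR checksum are all hypothetical choices.

        import struct

        HEADER = 0xAA                                   # hypothetical frame marker

        def build_packet(cmd: int, payload: bytes) -> bytes:
            """Frame layout: header, command, payload length, payload, XOR checksum."""
            body = struct.pack("BBB", HEADER, cmd, len(payload)) + payload
            checksum = 0
            for b in body:
                checksum ^= b
            return body + bytes([checksum])

        def parse_packet(frame: bytes):
            """Return (cmd, payload), or raise ValueError on a corrupted frame."""
            if len(frame) < 4 or frame[0] != HEADER:
                raise ValueError("bad header")
            checksum = 0
            for b in frame[:-1]:
                checksum ^= b
            if checksum != frame[-1]:
                raise ValueError("checksum mismatch")
            cmd, length = frame[1], frame[2]
            if length != len(frame) - 4:
                raise ValueError("bad length")
            return cmd, frame[3:3 + length]

        pkt = build_packet(0x01, b"\x64")   # hypothetical "set fan speed" command
        print(parse_packet(pkt))            # -> (1, b'd')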

    NASA space station automation: AI-based technology review

    Research and development projects in automation for the Space Station are discussed. Artificial intelligence (AI) based automation technologies are planned to enhance crew safety through a reduced need for EVA, to increase crew productivity through the reduction of routine operations, to increase Space Station autonomy, and to augment Space Station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Vision-Guided Mobile Robot Navigation

    This report discusses the use of vision feedback for autonomous navigation by a mobile robot in indoor environments. In particular, we discuss in detail the issues of camera calibration and how binocular and monocular vision may be utilized for self-location by the robot. A noteworthy feature of the monocular approach is that the camera image is compared with a CAD model of the interior of the hallways using the PSEIKI reasoning system, which allows the comparison to take place at different levels of geometric detail.
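    As a modern stand-in for the calibration step the report discusses (the report itself predates OpenCV, so this is not its method), intrinsic calibration from chessboard views can be sketched as follows; the board size and image paths are placeholders.

        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)                     # inner-corner count of the chessboard
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_points, img_points = [], []
        for path in glob.glob("calib/*.png"):            # placeholder image set
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_points.append(objp)
                img_points.append(corners)

        # Recovers the intrinsic matrix K and the lens distortion coefficients.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, gray.shape[::-1], None, None)
        print("reprojection RMS:", rms)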

    Hybrid Focal Stereo Networks for Pattern Analysis in Homogeneous Scenes

    In this paper we address the problem of multiple-camera calibration in the presence of a homogeneous scene, without the possibility of employing calibration-object-based methods. The proposed solution exploits salient features present in a larger field of view, but instead of employing active vision we replace the cameras with stereo rigs featuring a long-focal-length analysis camera as well as a short-focal-length registration camera. Thus we are able to propose an accurate solution that does not require intrinsic variation models, as in the case of zooming cameras. Moreover, the availability of the two views simultaneously in each rig allows pose re-estimation between rigs as often as necessary. The algorithm has been successfully validated in an indoor setting, as well as on a difficult scene featuring a highly dense pilgrim crowd in Makkah.
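    Pose re-estimation between rigs can be illustrated with the standard essential-matrix technique: given matched points in two views and known intrinsics K, the relative rotation and (up-to-scale) translation are recovered. This is a hedged sketch of the general technique, not the paper's exact pipeline; pts1, pts2, and K are assumed inputs.

        import cv2
        import numpy as np

        def relative_pose(pts1, pts2, K):
            """pts1, pts2: Nx2 float arrays of matched image points in two views."""
            E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                              method=cv2.RANSAC, threshold=1.0)
            # R and t (t only up to scale); the inlier mask is refined by the
            # cheirality check inside recoverPose.
            _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
            return R, t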

    Fog computing enabled cost-effective distributed summarization of surveillance videos for smart cities

    Fog computing is emerging as an attractive paradigm for academia and industry alike. It holds the potential for new breeds of services and user experiences, yet it is still nascent and requires strong groundwork before it can be adopted as a practically feasible, cost-effective, efficient, and easily deployable alternative to the currently ubiquitous cloud. Fog computing promises to provide cloud-like services on the local network while reducing cost. In this paper, we present a novel resource-efficient framework for distributed video summarization over a multi-region fog computing paradigm. The nodes of the fog network are based on a resource-constrained device, the Raspberry Pi. Surveillance videos are distributed over different nodes and a summary is generated over the fog network, which is periodically pushed to the cloud to reduce bandwidth consumption. Realistic workloads in the form of surveillance videos are used to evaluate the proposed system. Experimental results suggest that, even using an extremely resource-limited single-board computer, the proposed framework has very little overhead and good scalability compared to costly off-the-shelf cloud solutions, validating its effectiveness for IoT-assisted smart cities.
    Nasir, M.; Muhammad, K.; Lloret, J.; Sangaiah, A.K.; Sajjad, M. (2019). Fog computing enabled cost-effective distributed summarization of surveillance videos for smart cities. Journal of Parallel and Distributed Computing, 126:161-170. https://doi.org/10.1016/j.jpdc.2018.11.004
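    A minimal sketch of the kind of lightweight keyframe extraction a Raspberry Pi fog node could run for summarization (an illustration of the general idea, not the paper's exact method): a frame is kept when its gray-level histogram differs enough from the last kept frame, and only the kept frames would be pushed to the cloud. The threshold and video path are placeholders.

        import cv2

        def summarize(video_path, threshold=0.4):
            cap = cv2.VideoCapture(video_path)
            keyframes, last_hist = [], None
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
                cv2.normalize(hist, hist)
                if last_hist is None or cv2.compareHist(
                        last_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
                    keyframes.append(frame)   # candidate summary frame for the cloud
                    last_hist = hist
            return keyframes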