2,437 research outputs found

    Detection and estimation of moving obstacles for a UAV

    In recent years, research interest in Unmanned Aerial Vehicles (UAVs) has grown rapidly because of their potential use in a wide range of applications. In this paper, we propose vision-based detection and position/velocity estimation of moving obstacles for a UAV. Knowledge of a moving obstacle's state, i.e., its position and velocity, is essential for an intelligent UAV system to perform well, especially in autonomous navigation and landing tasks. The novelties are: (1) the design and implementation of a localization method using a sensor fusion methodology that fuses Inertial Measurement Unit (IMU) signals with Pozyx signals; (2) the development of a method for detecting and estimating moving obstacles based on an on-board vision system. Experimental results validate the effectiveness of the proposed approach. (C) 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
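
    The abstract does not specify the fusion algorithm, but a common way to fuse high-rate IMU accelerations with low-rate position fixes such as Pozyx is a linear Kalman filter. The 1-D sketch below is an illustration under that assumption, not the paper's actual implementation; all names and noise values are hypothetical.

```python
import numpy as np

# 1-D Kalman filter sketch: IMU acceleration drives the prediction step,
# a Pozyx position fix drives the correction step (an assumed design).
dt = 0.01                                  # IMU sample period (s)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for [pos, vel]
B = np.array([0.5 * dt**2, dt])            # acceleration input model
H = np.array([[1.0, 0.0]])                 # Pozyx observes position only
Q = 1e-4 * np.eye(2)                       # process noise (illustrative)
R = np.array([[0.05]])                     # measurement noise (illustrative)

def predict(x, P, accel):
    """Propagate the state with an IMU acceleration sample."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Correct the state with a Pozyx position measurement z."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy run: constant 1 m/s^2 acceleration with noiseless position fixes.
x, P = np.zeros(2), np.eye(2)
for k in range(1, 101):
    x, P = predict(x, P, accel=1.0)
    true_pos = 0.5 * (k * dt) ** 2
    x, P = update(x, P, np.array([true_pos]))
```

    With exact measurements the estimate tracks the true trajectory; in practice the same predict/update loop would run per axis (or with a full 3-D state) at the IMU rate, applying corrections whenever a Pozyx fix arrives.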

    Measuring traffic flow and lane changing from semi-automatic video processing

    Comprehensive databases are needed in order to extend our knowledge of the behavior of vehicular traffic. Nevertheless, data coming from common traffic detectors are incomplete: detectors only provide vehicle counts, detector occupancy, and speeds at discrete locations. To enrich these databases, additional measurements from other data sources, such as video recordings, are used. Extracting data from videos by watching the entire length of the recordings and counting manually is extremely time-consuming. The alternative is to set up an automatic video detection system. This is also costly in terms of money and time, and generally does not pay off for sporadic use in a pilot test. An adaptation of the semi-automatic video processing methodology proposed by Patire (2010) is presented here. It makes it possible to count flow and lane changes 90% faster than counting them by watching the video. The method consists of selecting specific lines of pixels in the video and converting them into a set of space-time images; manual time is only spent counting from these images. The method is adaptive, in the sense that counting is always done at the maximum speed and is not constrained by the video playback speed, allowing the analyst to go faster when there are few counts and slower when many counts occur. This methodology has been used for measuring off-ramp flows and lane changing at several locations on the B-23 freeway (Soriguera & Sala, 2014). Results show that, as long as the video recordings fulfill some minimum requirements in framing and quality, the method is easy to use, fast, and reliable. It is intended for research purposes, when some hours of video recording have to be analyzed, not for long-term use in a Traffic Management Center.
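
    The space-time image construction described above can be sketched as follows: the grey values along one chosen pixel line (e.g. across a lane) are copied out of each frame and stacked as columns, so a vehicle crossing the line appears as a compact blob in the resulting image. The function and parameter names here are illustrative, not from the paper.

```python
import numpy as np

def space_time_image(frames, row, col_start, col_end):
    """Stack one pixel line per frame into a (line_length, n_frames) image.

    frames    : sequence of 2-D greyscale frames
    row       : image row of the chosen pixel line
    col_start, col_end : column range of the line
    """
    cols = [f[row, col_start:col_end] for f in frames]
    return np.stack(cols, axis=1)

# Toy video: eight 10x10 frames; a bright "vehicle" crosses the
# measurement line in frame 3 only.
frames = [np.zeros((10, 10), dtype=np.uint8) for _ in range(8)]
frames[3][5, 2:6] = 255
sti = space_time_image(frames, row=5, col_start=0, col_end=10)
```

    Counting vehicles then reduces to counting blobs along the time axis of `sti`, which a human can scan much faster than real-time video playback.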

    Road User Detection in Videos

    Successive frames of a video are highly redundant, and the most popular object detection methods do not take advantage of this fact. Using multiple consecutive frames can improve detection of small objects or difficult examples, and can improve speed and detection consistency in a video sequence, for instance by interpolating features between frames. In this work, a novel approach is introduced to perform online video object detection using two consecutive frames of video sequences involving road users. Two new models, RetinaNet-Double and RetinaNet-Flow, are proposed, based respectively on the concatenation of a target frame with a preceding frame, and on the concatenation of the optical flow with the target frame. The models are trained and evaluated on three public datasets. Experiments show that using a preceding frame improves performance over single-frame detectors, but using explicit optical flow usually does not.
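
    The RetinaNet-Double input described above, concatenating a target frame with its preceding frame, can be sketched as a channel-wise stack that turns two RGB frames into one six-channel detector input. This is only an illustration of the idea; the published models define the exact preprocessing and channel order.

```python
import numpy as np

def double_frame_input(prev_frame, target_frame):
    """Concatenate two HxWx3 frames into one HxWx6 tensor (illustrative)."""
    assert prev_frame.shape == target_frame.shape
    # Target frame first, preceding frame appended on the channel axis.
    return np.concatenate([target_frame, prev_frame], axis=-1)

# Toy frames: the "previous" frame is black, the "current" one is white.
prev = np.zeros((4, 4, 3), dtype=np.float32)
curr = np.ones((4, 4, 3), dtype=np.float32)
pair = double_frame_input(prev, curr)
```

    A detector backbone consuming this input only needs its first convolution widened from 3 to 6 input channels; everything downstream is unchanged, which is what makes the two-frame variant cheap compared to explicit optical-flow computation.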

    Are You in the Line? RSSI-based Queue Detection in Crowds

    Crowd behaviour analytics focuses on the behavioural characteristics of groups of people rather than individuals' activities. This work considers human queuing behaviour, a specific crowd behaviour of groups. We design a plug-and-play system solution to the queue detection problem based on Wi-Fi/Bluetooth Low Energy (BLE) received signal strength indicators (RSSIs) captured by multiple signal sniffers. The goal of this work is to determine whether a device is in the queue based only on RSSIs. The key idea is to extract features not only from an individual device's data, but also from the mobility similarity between data from multiple devices and the mobility correlation observed by multiple sniffers. Thus, we propose single-device, cross-device, and cross-sniffer feature extraction for model training and classification. We systematically conduct experiments with simulated queue movements to study the detection accuracy. Finally, we compare our signal-based approach against a camera-based face detection approach in a real-world social event with a real human queue. The experimental results indicate that our approach reaches a minimum accuracy of 77% and significantly outperforms camera-based face detection, because people block each other's visibility whereas wireless signals can be detected without such blocking.
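
    One plausible instance of the cross-device feature extraction described above is the similarity between two devices' RSSI traces at the same sniffer: devices in the same queue shuffle forward together, so their traces co-vary, while a passer-by's trace does not. Pearson correlation is used here as an assumed similarity measure; the paper's exact features are not given in this abstract.

```python
import numpy as np

def mobility_similarity(rssi_a, rssi_b):
    """Pearson correlation between two equal-length RSSI traces (in dBm)."""
    a = np.asarray(rssi_a, dtype=float)
    b = np.asarray(rssi_b, dtype=float)
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two devices advancing together in a queue (slowly rising RSSI as they
# approach the sniffer) versus one device walking past the sniffer.
queue_a  = [-70, -68, -66, -64, -62, -60]
queue_b  = [-72, -70, -69, -66, -64, -61]
passerby = [-60, -65, -75, -75, -65, -60]
```

    A classifier can then combine such pairwise similarities with single-device features (e.g. RSSI trend per sniffer) to decide whether a device is in the queue.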