Face tracking using a hyperbolic catadioptric omnidirectional system
In the first part of this paper, we present a brief review of catadioptric omnidirectional
systems. The special case of the hyperbolic omnidirectional system is analysed in depth.
The literature shows that a hyperboloidal mirror has two clear advantages over alternative
geometries. Firstly, a hyperboloidal mirror has a single projection centre [1]. Secondly, the
image resolution is uniformly distributed along the mirror’s radius [2].
In the second part of this paper, we show empirical results for the detection and tracking
of faces in omnidirectional images using the Viola-Jones method. Both panoramic and
perspective projections, extracted from the omnidirectional image, were used for this purpose.
The omnidirectional images were 480×480 pixels, in greyscale. The tracking method used
regions of interest (ROIs) set from face detections on a panoramic projection of the
image. To avoid losing or duplicating detections, the panoramic projection was
extended horizontally. Duplications were eliminated based on the ROIs established by previous
detections. After a confirmed detection, faces were tracked from perspective projections (which
are called virtual cameras), each associated with a particular face. The zoom, pan, and tilt
of each virtual camera were determined by the ROIs previously computed on the panoramic
image.
The results show that a careful combination of the two projections achieves good frame
rates while tracking faces reliably.
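As an illustrative sketch (not the authors' implementation), the panoramic projection with horizontal extension described above can be written as a polar unwrapping of the omnidirectional image; the `overlap` columns duplicate the start of the strip so that detections spanning the 0°/360° seam are not lost. The radii, sizes, and function names here are assumptions:

```python
import numpy as np

def unwrap_panorama(omni, r_min, r_max, pan_width, overlap=60):
    """Unwrap a square omnidirectional image into a panoramic strip.

    The strip is extended horizontally by `overlap` columns, so faces
    straddling the angular seam appear whole in the extension.
    """
    h, w = omni.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    pan_height = r_max - r_min
    out_width = pan_width + overlap
    pano = np.zeros((pan_height, out_width), dtype=omni.dtype)
    for col in range(out_width):
        # Columns past pan_width wrap around to angle 0 again.
        theta = 2.0 * np.pi * (col % pan_width) / pan_width
        for row in range(pan_height):
            r = r_min + row
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= x < w and 0 <= y < h:
                pano[row, col] = omni[y, x]
    return pano

# A 480×480 greyscale omnidirectional frame, as in the paper.
omni = np.random.randint(0, 256, (480, 480), dtype=np.uint8)
pano = unwrap_panorama(omni, r_min=80, r_max=200, pan_width=720)
print(pano.shape)  # (120, 780)
```

A face detector (e.g. a Viola-Jones cascade) would then run on `pano`, with detections in the overlap region deduplicated against ROIs found near column 0.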
Application of augmented reality and robotic technology in broadcasting: A survey
As an innovative technology, Augmented Reality (AR) has gradually been deployed in the broadcast, videography, and cinematography industries. Virtual graphics generated by AR are dynamic and overlaid on the environment, so the original appearance can be greatly enhanced compared with traditional broadcasting. In addition, AR enables broadcasters to interact with augmented virtual 3D models in a broadcasting scene in order to enhance the performance of a broadcast. Recently, advanced robotic technologies have been deployed in camera shooting systems to create robotic camera operators, further improving AR broadcasting; this paper highlights these developments.
Aerial-Ground collaborative sensing: Third-Person view for teleoperation
Rapid deployment and operation are key requirements in time-critical
applications such as Search and Rescue (SaR). Efficiently teleoperated ground
robots can support first-responders in such situations. However, first-person
view teleoperation is sub-optimal in difficult terrains, while a third-person
perspective can drastically increase teleoperation performance. Here, we
propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide
third-person perspective to ground robots. While our approach is based on local
visual servoing, it further leverages the global localization of several ground
robots to seamlessly transfer between these ground robots in GPS-denied
environments. In this way, one MAV can support multiple ground robots on
demand. Furthermore, our system enables different visual detection regimes,
enhanced operability, and return-home functionality. We evaluate our system in
real-world SaR scenarios. (Comment: Accepted for publication in the 2018 IEEE
International Symposium on Safety, Security and Rescue Robotics (SSRR).)
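To illustrate the third-person-view idea with a hypothetical sketch (not the paper's visual-servoing controller, whose details the abstract does not give), a simple geometric setpoint places the MAV behind a ground robot along its heading, at a fixed height, facing the same direction; all names and parameters here are assumptions:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float    # metres, world frame
    y: float    # metres, world frame
    yaw: float  # heading in radians

def third_person_setpoint(robot, standoff, height):
    """Compute a MAV pose that gives a third-person view of `robot`:
    hover `standoff` metres behind it along its heading, at `height`,
    with the camera facing the robot's direction of travel."""
    mx = robot.x - standoff * math.cos(robot.yaw)
    my = robot.y - standoff * math.sin(robot.yaw)
    return (mx, my, height, robot.yaw)

# Robot at the origin heading along +x: the MAV hovers 3 m behind it.
setpoint = third_person_setpoint(Pose2D(0.0, 0.0, 0.0), standoff=3.0, height=2.0)
print(setpoint)  # (-3.0, 0.0, 2.0, 0.0)
```

Switching the `robot` argument to another ground robot's pose is, in this simplified picture, how one MAV could serve several robots on demand.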
A vision system for mobile maritime surveillance platforms
Mobile surveillance systems play an important role in minimising security and safety threats in high-risk or hazardous environments. Providing a mobile marine surveillance platform with situational awareness of its environment is important for mission success. An essential part of situational awareness is the ability to detect and subsequently track potential target objects.

Typically, the exact type of target object is unknown, hence detection is addressed as the problem of finding parts of an image that stand out relative to their surrounding regions or are atypical for the domain. Contrary to existing saliency methods, this thesis proposes a domain-specific visual attention approach for detecting potential regions of interest in maritime imagery. For this, low-level features that are indicative of maritime targets are identified. These features are then evaluated with respect to their local, regional, and global significance. Together with a domain-specific background segmentation technique, the features are combined in a Bayesian classifier to direct visual attention to potential target objects.

The maritime environment introduces challenges for the camera system: gusts, wind, swell, or waves can cause the platform to move drastically and unpredictably. Pan-tilt-zoom cameras, often used for surveillance tasks, can adjust their orientation to provide a stable view of the target; however, in rough maritime environments this requires high-speed and precise inputs. In contrast, omnidirectional cameras provide a full spherical view, which allows the acquisition and tracking of multiple targets at the same time, although each target occupies only a small fraction of the overall view. This thesis therefore proposes a novel, target-centric approach to image stabilisation: a virtual camera is extracted from the omnidirectional view for each target and adjusted based on the measurements of an inertial measurement unit and an image feature tracker.
The combination of these two techniques in a probabilistic framework allows stabilisation of both rotational and translational ego-motion. It has the specific advantage of being robust to loosely calibrated and synchronised hardware: because tracking and stabilisation are fused, tracking uncertainty can compensate for errors in calibration and synchronisation. This eliminates the need for tedious calibration phases and the adverse effects of assembly slippage over time.

Finally, this thesis combines the visual attention and omnidirectional stabilisation frameworks into a multi-view tracking system capable of detecting potential target objects in the maritime domain. Although the visual attention framework performed well on benchmark datasets, evaluation on real-world maritime imagery produced a high number of false positives. An investigation reveals that benchmark datasets are unconsciously influenced by human shot selection, which greatly simplifies the visual attention problem. Despite the false positives, the tracking approach itself remains robust even when many false positives are tracked.
Improving Omnidirectional Camera-Based Robot Localization Through Self-Supervised Learning
Autonomous agents in any environment require accurate and reliable position and motion estimation to complete their tasks. Many sensor modalities have been utilized for this purpose, such as GPS, ultra-wide band, visual simultaneous localization and mapping (SLAM), and light detection and ranging (LiDAR) SLAM, yet many traditional positioning systems do not take advantage of recent advances in machine learning. In this work, an omnidirectional camera position estimation system relying primarily on a learned model is presented. The positioning system benefits from the wide field of view provided by an omnidirectional camera. Recent developments in self-supervised learning for generating useful features from unlabeled data are also assessed, and a novel radial patch pretext task for omnidirectional images is presented. The resulting implementation is a robot localization and tracking algorithm that can be adapted to a variety of environments, such as warehouses and college campuses. Further experiments with additional sensor types, including 3D LiDAR, 60 GHz wireless, and ultra-wideband localization systems utilizing machine learning, are also explored. A fused localization model utilizing multiple sensor modalities is evaluated against individual sensor models.
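A minimal sketch of what a radial patch pretext task could look like (the paper's actual task may differ; sizes, radius, and function names here are assumptions): patches are sampled along a circle around the omnidirectional image centre, and the angular sector index serves as the self-supervised label a network would be trained to predict.

```python
import numpy as np

def radial_patches(omni, n_patches, patch_size, radius):
    """Sample square patches whose centres lie on a circle around the
    image centre; the sector index k is the pretext-task label."""
    h, w = omni.shape[:2]
    cx, cy = w // 2, h // 2
    half = patch_size // 2
    patches, labels = [], []
    for k in range(n_patches):
        theta = 2.0 * np.pi * k / n_patches
        px = int(cx + radius * np.cos(theta))
        py = int(cy + radius * np.sin(theta))
        patches.append(omni[py - half:py + half, px - half:px + half])
        labels.append(k)
    return np.stack(patches), np.array(labels)

# One 480×480 omnidirectional frame, 8 sectors of 32×32 patches.
omni = np.random.rand(480, 480)
X, y = radial_patches(omni, n_patches=8, patch_size=32, radius=150)
print(X.shape, y.shape)  # (8, 32, 32) (8,)
```

Because the patch geometry follows the image's radial distortion pattern, such a task could plausibly encourage features suited to omnidirectional imagery, which is the stated motivation for a radial (rather than grid-based) patch layout.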
FieldSAFE: Dataset for Obstacle Detection in Agriculture
In this paper, we present a novel multi-modal dataset for obstacle detection
in agriculture. The dataset comprises approximately 2 hours of raw sensor data
from a tractor-mounted sensor system in a grass mowing scenario in Denmark,
October 2016. Sensing modalities include stereo camera, thermal camera, web
camera, 360-degree camera, lidar, and radar, while precise localization is
available from fused IMU and GNSS. Both static and moving obstacles are present
including humans, mannequin dolls, rocks, barrels, buildings, vehicles, and
vegetation. All obstacles have ground-truth object labels and geographic
coordinates. (Comment: Submitted to the special issue of MDPI Sensors: Sensors in Agriculture.)
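As a hypothetical example of working with such a multi-modal dataset (this is not part of FieldSAFE's own tooling), nearest-timestamp matching can pair frames from two sensing modalities recorded at different rates:

```python
import bisect

def nearest_sync(ref_stamps, other_stamps, max_dt=0.05):
    """Pair each reference timestamp (e.g. lidar, sorted) with the
    nearest timestamp from another modality (e.g. camera, sorted),
    keeping only pairs closer than `max_dt` seconds."""
    pairs = []
    for t in ref_stamps:
        i = bisect.bisect_left(other_stamps, t)
        cands = [j for j in (i - 1, i) if 0 <= j < len(other_stamps)]
        if not cands:
            continue
        best = min(cands, key=lambda j: abs(other_stamps[j] - t))
        if abs(other_stamps[best] - t) <= max_dt:
            pairs.append((t, other_stamps[best]))
    return pairs

# Lidar at t=0.0 and 0.1 s pairs with nearby camera frames; t=0.5 s has none.
pairs = nearest_sync([0.0, 0.1, 0.5], [0.02, 0.12], max_dt=0.05)
print(pairs)  # [(0.0, 0.02), (0.1, 0.12)]
```

The fused IMU/GNSS localization stream mentioned above could be aligned to any modality the same way, using its timestamps as the reference.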