DeepSignals: Predicting Intent of Drivers Through Visual Signals
Detecting the intention of drivers is an essential task in self-driving,
necessary to anticipate sudden events like lane changes and stops. Turn signals
and emergency flashers communicate such intentions, providing seconds of
potentially critical reaction time. In this paper, we propose to detect these
signals in video sequences by using a deep neural network that reasons about
both spatial and temporal information. Our experiments on more than a million
frames show high per-frame accuracy in very challenging scenarios.
Comment: To be presented at the IEEE International Conference on Robotics and Automation (ICRA), 201
Challenges in video based object detection in maritime scenario using computer vision
This paper discusses the technical challenges in maritime image processing
and machine vision problems for video streams generated by cameras. Even well
documented problems of horizon detection and registration of frames in a video
are very challenging in maritime scenarios. More advanced problems, such as
background subtraction and object detection in video streams, are harder
still. The dynamic nature of the background, the unavailability of static
cues, the presence of small objects against distant backgrounds, and
illumination effects all contribute to the challenges discussed here.
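As a point of reference for the background-subtraction problem the abstract raises, a common baseline is an adaptive running-average model. The sketch below is a minimal NumPy illustration of that baseline, with illustrative parameter values; it is not a method from the paper itself:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Adaptive running-average background model.

    alpha controls how quickly the model absorbs scene changes,
    e.g. a slowly varying sea surface (illustrative value).
    """
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Pixels that differ from the background by more than thresh."""
    return np.abs(frame.astype(float) - bg) > thresh
```

In maritime scenes the dynamic water surface makes a fixed threshold fragile, which is precisely why the paper describes background subtraction as challenging in this setting.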
Helicopter flights with night-vision goggles: Human factors aspects
Night-vision goggles (NVGs) and, in particular, the advanced, helmet-mounted Aviators Night-Vision-Imaging System (ANVIS) allow helicopter pilots to perform low-level flight at night. They consist of light intensifier tubes which amplify low-intensity ambient illumination (starlight and moonlight) and an optical system which together produce a bright image of the scene. However, these NVGs do not turn night into day, and, while they may often provide significant advantages over unaided night flight, they may also result in visual fatigue, high workload, and safety hazards. These problems reflect both system limitations and human-factors issues. A brief description of the technical characteristics of NVGs and of human night-vision capabilities is followed by a description and analysis of specific perceptual problems which occur with the use of NVGs in flight. Some of the issues addressed include: limitations imposed by a restricted field of view; problems related to binocular rivalry; the consequences of inappropriate focusing of the eye; the effects of ambient illumination levels and of various types of terrain on image quality; difficulties in distance and slope estimation; effects of dazzling; and visual fatigue and superimposed symbology. These issues are described and analyzed in terms of their possible consequences on helicopter pilot performance. The additional influence of individual differences among pilots is emphasized. Thermal imaging systems (forward-looking infrared (FLIR)) are described briefly and compared to light intensifier systems (NVGs). Many of the phenomena described are not readily understood. More research is required to better understand the human-factors problems created by the use of NVGs and other night-vision aids, to enhance system design, and to improve training methods and simulation techniques.
Improving the efficiency and accuracy of nocturnal bird surveys through equipment selection and partial automation
This thesis was submitted for the degree of Engineering Doctorate and awarded by Brunel University. Birds are a key environmental asset, and this is recognised through comprehensive legislation and policy ensuring their protection and conservation. Many species are active at night, and surveys are required to understand the implications of proposed developments such as towers and to reduce possible conflicts with these structures. Night vision devices are commonly used in nocturnal surveys, either to scope an area for bird numbers and activity, or in remotely sensing an area to determine potential risk. This thesis explores some practical and theoretical approaches that can improve the accuracy, confidence and efficiency of nocturnal bird surveillance. As image intensifiers and thermal imagers have operational differences, each device has associated strengths and limitations. Empirical work established that image intensifiers are best used for species identification of birds against the ground or vegetation. Thermal imagers perform best in detection tasks and monitoring bird airspace usage. The typical approach of viewing remote-sensing bird survey video in its entirety is slow, inaccurate and inefficient. Accuracy can be significantly improved by viewing the survey video at half the playback speed. Motion detection efficiency and accuracy can be greatly improved through the use of adaptive background subtraction and cumulative image differencing. An experienced ornithologist uses bird flight style and wing oscillations to identify bird species. Changes in wing oscillations can be represented in a single inter-frame similarity matrix through area-based differencing. Bird species classification can then be automated using singular value decomposition to reduce the matrices to one-dimensional vectors for training a feed-forward neural network.
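The classification pipeline described above (inter-frame similarity matrix, then SVD reduction to a one-dimensional feature vector) can be sketched in a few lines of NumPy. Function names and the choice of k below are assumptions for illustration, not the thesis's actual code:

```python
import numpy as np

def similarity_matrix(frames):
    """Inter-frame similarity via area-based absolute differencing.

    frames: list of equally sized 2-D arrays (e.g. cropped bird images).
    Smaller summed pixel difference means higher similarity, so we
    negate the sum to make similar frames score higher.
    """
    n = len(frames)
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            S[i, j] = -np.abs(frames[i] - frames[j]).sum()
    return S

def feature_vector(S, k=5):
    """Reduce the similarity matrix to a 1-D vector of its k leading
    singular values, suitable as input to a feed-forward classifier."""
    return np.linalg.svd(S, compute_uv=False)[:k]
```

Because wing oscillations modulate the frame-to-frame differences, the singular-value spectrum summarises the oscillation pattern in a fixed-length vector, which is what makes a standard feed-forward network applicable.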
Beware the Boojum: Caveats and Strengths of Avian Radar
Radar provides a useful and powerful tool to wildlife biologists and ornithologists. However, radar also has the potential for errors on a scale not previously possible. In this paper, we focus on the strengths and limitations of avian surveillance radars that use marine radar front-ends integrated with digital radar processors to provide 360° of coverage. Modern digital radar processors automatically extract target information, including target attributes such as location, speed, heading, intensity, and radar cross-section (size) as functions of time. Such data can be stored indefinitely, providing a rich resource for ornithologists and wildlife managers. Interpreting these attributes in view of the characteristics of the sensor from which they are generated is the key to correctly deriving and exploiting application-specific information about birds and bats. We also discuss (1) weather radars and air-traffic control surveillance radars that could be used to monitor birds on larger, coarser spatial scales; (2) other nonsurveillance radar configurations, such as vertically scanning radars used for vertical profiling of birds along a particular corridor; and (3) Doppler, single-target tracking radars used for extracting radial velocity and wing-beat frequency information from individual birds for species identification purposes.
TractorEYE: Vision-based Real-time Detection for Autonomous Vehicles in Agriculture
Agricultural vehicles such as tractors and harvesters have for decades been able to navigate automatically and more efficiently using commercially available products such as auto-steering and tractor-guidance systems. However, a human operator is still required inside the vehicle to ensure the safety of the vehicle and especially of its surroundings, such as humans and animals. To get fully autonomous vehicles certified for farming, computer vision algorithms and sensor technologies must detect obstacles with performance equivalent to or better than human level. Furthermore, detections must run in real-time to allow the vehicle to actuate and avoid collisions. This thesis proposes a detection system (TractorEYE), a dataset (FieldSAFE), and procedures to fuse information from multiple sensor technologies to improve detection of obstacles and to generate a map. TractorEYE is a multi-sensor detection system for autonomous vehicles in agriculture. The multi-sensor system consists of three hardware-synchronized and registered sensors (stereo camera, thermal camera and multi-beam lidar) mounted on/in a ruggedized and water-resistant casing. Algorithms have been developed to run a total of six detection algorithms (four for the RGB camera, one for the thermal camera and one for the multi-beam lidar) and fuse detection information in a common format using either 3D positions or Inverse Sensor Models. A GPU-powered computational platform runs the detection algorithms online. For the RGB camera, a deep learning algorithm, DeepAnomaly, is proposed to perform real-time anomaly detection of distant, heavily occluded and unknown obstacles in agriculture. Compared to a state-of-the-art object detector, Faster R-CNN, DeepAnomaly is able, for an agricultural use case, to detect humans better and at longer ranges (45-90 m) using a smaller memory footprint and 7.3-times-faster processing. Low memory footprint and fast processing make DeepAnomaly suitable for real-time applications running on an embedded GPU.
FieldSAFE is a multi-modal dataset for detection of static and moving obstacles in agriculture. The dataset includes synchronized recordings from an RGB camera, stereo camera, thermal camera, 360-degree camera, lidar and radar. Precise localization and pose are provided using IMU and GPS. Ground truth for static and moving obstacles (humans, mannequin dolls, barrels, buildings, vehicles, and vegetation) is available as an annotated orthophoto, with GPS coordinates for moving obstacles. Detection information from multiple detection algorithms and sensors is fused into a map using Inverse Sensor Models and occupancy grid maps. This thesis presents several scientific contributions and state-of-the-art results in perception for autonomous tractors, including a dataset, a sensor platform, detection algorithms and procedures to perform multi-sensor fusion. Furthermore, important engineering contributions to autonomous farming vehicles are presented, such as easily applicable, open-source software packages and algorithms that have been demonstrated in an end-to-end real-time detection system. The contributions of this thesis have demonstrated, addressed and solved critical issues in utilizing camera-based perception systems that are essential to make autonomous vehicles in agriculture a reality.
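Fusing detections from several sensors into an occupancy grid via inverse sensor models is typically done with additive log-odds updates. The pure-Python sketch below illustrates that general technique for a single cell; the increment values are illustrative assumptions, not those used in the thesis:

```python
import math

L_OCC, L_FREE = 0.85, -0.4  # log-odds increments (illustrative values)

def update_cell(logodds, hit):
    """Inverse-sensor-model update of a single grid cell in log-odds form.

    hit=True when a sensor reports the cell occupied, False when it
    reports the cell free; updates from independent sensors simply add.
    """
    return logodds + (L_OCC if hit else L_FREE)

def to_probability(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-logodds))

# Fuse two sensors observing the same cell:
cell = 0.0                       # prior: unknown, p = 0.5
cell = update_cell(cell, True)   # e.g. thermal detector reports an obstacle
cell = update_cell(cell, True)   # e.g. lidar detector confirms it
```

Because the updates are additive in log-odds space, detections from the stereo, thermal and lidar pipelines can be fused in any order, which suits asynchronous multi-sensor systems.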