Obstacle detection and avoidance on sidewalks
We present part of a vision system for blind and visually impaired people. It detects obstacles on sidewalks and provides guidance to avoid them. Obstacles are trees, light poles, trash cans, holes, branches, stones and other objects at a distance of 3 to 5 meters from the camera position. The system first detects the sidewalk borders,
using edge information in combination with a tracking mask, to obtain straight lines with their slopes and the vanishing point. Once the borders are found, a rectangular window is defined within which two obstacle
detection methods are applied. The first determines the variation of the maxima and minima of the gray levels of the pixels. The second uses the binary edge image and searches the vertical and horizontal histograms for discrepancies in the number of edge points. Together, these methods make it possible to detect obstacles with
their position and size, so that the user can be alerted and informed about the best way to avoid them. The system works in real time and complements normal navigation with the cane.
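The second test described above can be sketched as a simple histogram-discrepancy check. The function names and the deviation threshold below are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of the edge-histogram obstacle test: project a binary edge
# image onto its rows and columns, then flag bins whose edge counts deviate
# strongly from the window's average. The factor-of-two threshold is an
# assumption for illustration.

def edge_histograms(edge_img):
    """Row and column sums of a binary edge image (list of 0/1 rows)."""
    h, w = len(edge_img), len(edge_img[0])
    rows = [sum(r) for r in edge_img]
    cols = [sum(edge_img[y][x] for y in range(h)) for x in range(w)]
    return rows, cols

def discrepancy_bins(hist, factor=2.0):
    """Indices whose edge count exceeds `factor` times the mean: obstacle candidates."""
    mean = sum(hist) / len(hist)
    return [i for i, v in enumerate(hist) if v > factor * mean]
```

A dense vertical object inside the sidewalk window would show up as a column bin with far more edge points than its neighbours, giving a rough horizontal position for the alert.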
Real-time Spatial Detection and Tracking of Resources in a Construction Environment
Construction accidents involving heavy equipment, as well as poor decisions based on limited knowledge of the site environment, can lead to work interruptions and costly delays. Supporting the construction environment with real-time generated three-dimensional (3D) models can help prevent accidents and can support management by modeling infrastructure assets in 3D. Such models can be integrated into the path planning of construction equipment operations for obstacle avoidance, or into a 4D model that simulates construction processes. Detecting and guiding resources, such as personnel, machines and materials, to the right place on time requires methods and technologies that supply information in real time. This paper presents research in real-time 3D laser scanning and modeling using scanning technology with a high range-frame update rate. Existing and emerging sensors and techniques in three-dimensional modeling are explained. The presented research successfully developed computational models and algorithms for the real-time detection, tracking, and three-dimensional modeling of static and dynamic construction resources, such as workforce, machines, equipment, and materials, based on a 3D video range camera. In particular, the proposed algorithm for rapidly modeling three-dimensional scenes is explained. Laboratory and outdoor field experiments that were conducted to validate the algorithm’s performance and results are discussed.
MultiNet: Multi-Modal Multi-Task Learning for Autonomous Driving
Autonomous driving requires operation in different behavioral modes ranging
from lane following and intersection crossing to turning and stopping. However,
most existing deep learning approaches to autonomous driving do not consider
the behavioral mode in the training strategy. This paper describes a technique
for learning multiple distinct behavioral modes in a single deep neural network
through the use of multi-modal multi-task learning. We study the effectiveness
of this approach, denoted MultiNet, using self-driving model cars for driving
in unstructured environments such as sidewalks and unpaved roads. Using labeled
data from over one hundred hours of driving our fleet of 1/10th scale model
cars, we trained different neural networks to predict the steering angle and
driving speed of the vehicle in different behavioral modes. We show that in
each case, MultiNet networks outperform networks trained on individual modes
while using a fraction of the total number of parameters.
Comment: Published in IEEE WACV 201
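The mode-conditioned architecture described above can be illustrated with a toy sketch: a shared trunk extracts features once, and the behavioral mode selects which output head predicts steering and speed, so most parameters are shared across behaviors. All names and the toy head functions are assumptions for illustration, not the authors' network:

```python
# Illustrative sketch (not the MultiNet implementation) of multi-modal
# inference: one shared feature extractor, with the behavioral mode selecting
# a per-mode (steering, speed) head. The trunk and heads are trivial stand-ins
# for learned layers.

def shared_trunk(image_features):
    # stand-in for a convolutional trunk shared by all modes
    return image_features

HEADS = {
    "follow_lane": lambda f: (0.1 * sum(f), 1.0),  # toy (steering, speed) head
    "turn_left":   lambda f: (-0.5, 0.5),
    "turn_right":  lambda f: (0.5, 0.5),
}

def predict(image_features, mode):
    """Run the shared trunk, then the head for the requested behavioral mode."""
    f = shared_trunk(image_features)
    return HEADS[mode](f)
```

The design point being illustrated is that adding a mode adds only one small head, which is why the multi-mode network can use a fraction of the parameters of separate per-mode networks.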
An Experimental Study on Pitch Compensation in Pedestrian-Protection Systems for Collision Avoidance and Mitigation
This paper describes an improved stereovision system for the anticipated detection of car-to-pedestrian accidents. The improvement over previous versions of the pedestrian-detection system is achieved by compensating for the camera's pitch angle, which yields higher accuracy in locating the ground plane and more accurate depth measurements. The system has been mounted on two different prototype cars, and several real collision-avoidance and collision-mitigation experiments have been carried out on private circuits using actors and dummies, which represents one of the main contributions of this paper. Collision avoidance is carried out by means of deceleration strategies whenever the accident is avoidable. Likewise, collision mitigation is accomplished by triggering an active hood system.
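The compensation step can be sketched as a plane rotation of camera-frame points by the estimated pitch angle before the ground plane is fitted. The coordinate convention and function below are assumptions for illustration, not the paper's implementation:

```python
# Illustrative sketch (assumption, not from the paper) of pitch compensation:
# a point in the camera's y-z plane (y up, z along the optical axis) is rotated
# about the lateral x-axis by the estimated pitch angle, keeping height and
# depth measurements consistent with a level ground plane.
import math

def compensate_pitch(y, z, pitch_rad):
    """Rotate a camera-frame point by pitch_rad about the lateral (x) axis."""
    c, s = math.cos(pitch_rad), math.sin(pitch_rad)
    return y * c + z * s, z * c - y * s
```

With the pitch removed, a point's rotated y-coordinate can be compared directly against the ground-plane height, which is what makes the depth measurements more accurate.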
The SmartVision local navigation aid for blind and visually impaired persons
The SmartVision prototype is a small, cheap and easily wearable navigation aid for blind and visually impaired persons. Its functionality addresses global navigation for guiding the user to a destination, and local navigation for negotiating paths, sidewalks and corridors, with avoidance of static as well as moving obstacles. Local navigation applies to both indoor and outdoor situations. In this article we focus on local navigation: the detection of path borders and obstacles in front of the user and just beyond the reach of the white cane, so that the user can be assisted in centering on the path and alerted to looming hazards. Using a stereo camera worn at chest height, a portable computer in a shoulder-strapped pouch or pocket, and only one earphone or small speaker, the system is
inconspicuous, is no hindrance while walking with the cane, and does not block normal surround sounds. The vision algorithms are optimised so that the system can work at a few frames per second.
Real-Time Predictive Modeling and Robust Avoidance of Pedestrians with Uncertain, Changing Intentions
To plan safe trajectories in urban environments, autonomous vehicles must be
able to quickly assess the future intentions of dynamic agents. Pedestrians are
particularly challenging to model, as their motion patterns are often uncertain
and/or unknown a priori. This paper presents a novel changepoint detection and
clustering algorithm that, when coupled with offline unsupervised learning of a
Gaussian process mixture model (DPGP), enables quick detection of changes in
intent and online learning of motion patterns not seen in prior training data.
The resulting long-term movement predictions demonstrate improved accuracy
relative to offline learning alone, in terms of both intent and trajectory
prediction. By embedding these predictions within a chance-constrained motion
planner, trajectories that are probabilistically safe with respect to pedestrian
motions can be identified in real time. Hardware experiments demonstrate that this
approach can accurately predict pedestrian motion patterns from onboard
sensor/perception data and facilitate robust navigation within a dynamic
environment.
Comment: Submitted to 2014 International Workshop on the Algorithmic Foundations of Robotics
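The changepoint idea can be illustrated, in much-simplified form, by a running prediction-error test: declare a change in intent when the current motion model's recent errors grow too large. The actual paper uses Bayesian changepoint detection over a Gaussian process mixture; the window and threshold below are illustrative assumptions:

```python
# Toy sketch (not the DPGP changepoint algorithm): flag a change in pedestrian
# intent when the mean prediction error of the current motion model over a
# short window exceeds a threshold. Window size and threshold are assumed.

def detect_changepoint(errors, window=3, threshold=1.0):
    """Return the first index where the windowed mean error exceeds threshold,
    or None if the current model keeps explaining the observations."""
    for i in range(len(errors) - window + 1):
        if sum(errors[i:i + window]) / window > threshold:
            return i
    return None
```

Once such a change is flagged, the planner can switch to learning a new motion pattern online rather than trusting the stale offline model.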
Deep Learning Obstacle Detection and Avoidance for Powered Wheelchair
Depth sensors such as RGB-D cameras, LiDARs and laser scanners are widely investigated in Smart Wheelchair (SW) research to carry out navigation, localization, and obstacle detection and avoidance tasks. These sensors are costly compared to a monocular camera. A single off-the-shelf camera can be an economical sensor for obstacle detection and avoidance. In this paper we present a single-camera obstacle detection and avoidance method that does not use any 3D information. It is a novel vision-only system for wheelchair obstacle detection and avoidance that uses a Raspberry Pi together with the Raspberry Pi camera. Obstacles are detected using a deep learning model built on MobileNetV2 SSD, retrained on a dedicated dataset that was built for this purpose. Bounding boxes mark the detected obstacles and are fed as features to the image-space obstacle avoidance module. Figure 1 depicts an internal view of what the system sees and an abstract description of the system's functionality. © 2022 IEEE
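An image-space avoidance rule of the kind described above can be sketched as follows: detector boxes are reduced to a steering command by checking which boxes overlap a central lane and steering away from the widest blocking obstacle. The lane width and the decision rule are assumptions for illustration, not the paper's module:

```python
# Hedged sketch of an image-space avoidance step: detection boxes are given as
# (x0, y0, x1, y1) with coordinates normalized to [0, 1]. A box "blocks" when
# it overlaps the assumed central driving lane; the command steers away from
# the widest blocking box. All thresholds are illustrative assumptions.

def avoidance_command(boxes, lane=(0.35, 0.65)):
    """Return 'forward', 'left', or 'right' from normalized detection boxes."""
    blocking = [b for b in boxes if b[0] < lane[1] and b[2] > lane[0]]
    if not blocking:
        return "forward"
    # steer away from the side occupied by the widest blocking obstacle
    x0, _, x1, _ = max(blocking, key=lambda b: b[2] - b[0])
    return "left" if (x0 + x1) / 2 > 0.5 else "right"
```

Because only box geometry is used, no depth estimate is needed, which is the point of the vision-only design.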
Dynamic Path Planning and Replanning for Mobile Robots using RRT*
It is necessary for a mobile robot to be able to efficiently plan a path from
its starting, or current, location to a desired goal location. This is a
trivial task when the environment is static. However, the operational
environment of the robot is rarely static, and it often has many moving
obstacles. The robot may encounter one, or many, of these unknown and
unpredictable moving obstacles. The robot will need to decide how to proceed
when one of these obstacles is obstructing its path. A method of dynamic
replanning using RRT* is presented. The robot will modify its current plan
when an unknown random moving obstacle obstructs the path. Various experimental
results show the effectiveness of the proposed method.
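The replanning trigger can be sketched independently of the RRT* search itself: the robot keeps executing its current path and replans the remainder only when a newly observed obstacle intersects it. The disc-obstacle model and function names are assumptions for illustration, not the authors' implementation:

```python
# Minimal sketch of a dynamic-replanning trigger: test whether any remaining
# path segment passes within an obstacle's radius. The full RRT* tree rebuild
# would only run when this returns True.
import math

def segment_hits_circle(p, q, centre, radius):
    """True if segment p-q passes within `radius` of `centre` (all 2D points)."""
    px, py = p; qx, qy = q; cx, cy = centre
    dx, dy = qx - px, qy - py
    if dx == 0 and dy == 0:
        return math.hypot(px - cx, py - cy) <= radius
    # closest point on the segment to the obstacle centre, clamped to [0, 1]
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px + t * dx - cx, py + t * dy - cy) <= radius

def needs_replan(path, obstacle, radius):
    """Replan if any remaining path segment is blocked by the moving obstacle."""
    return any(segment_hits_circle(path[i], path[i + 1], obstacle, radius)
               for i in range(len(path) - 1))
```

Checking only the remaining segments keeps the test cheap enough to run at every sensor update, so the expensive RRT* replan happens only when actually required.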