Real-time Scene Segmentation Using a Light Deep Neural Network Architecture for Autonomous Robot Navigation on Construction Sites
Camera-equipped unmanned vehicles (UVs) have received a lot of attention in
data collection for construction monitoring applications. To develop an
autonomous platform, the UV should be able to process multiple modules (e.g.,
context-awareness, control, localization, and mapping) on an embedded platform.
Pixel-wise semantic segmentation provides a UV with the ability to be
contextually aware of its surrounding environment. However, in the case of
mobile robotic systems with limited computing resources, the large size of the
segmentation model and its high memory usage require substantial computing
resources, which is a major challenge for mobile UVs (e.g., small-scale
vehicles with limited payload and space). To overcome this challenge, this paper presents a
light and efficient deep neural network architecture to run on an embedded
platform in real-time. The proposed model segments navigable space on an image
sequence (i.e., a video stream), which is essential for an autonomous vehicle
that is based on machine vision. The results demonstrate the efficiency of
the proposed architecture compared to existing models and suggest possible
improvements that could make the model even more efficient, which is
necessary for the future development of autonomous robotic systems.
Comment: The 2019 ASCE International Conference on Computing in Civil Engineering
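The abstract does not specify the architecture, but lightweight segmentation networks commonly shrink their footprint by replacing standard convolutions with depthwise-separable ones (as in MobileNet- or ENet-style encoders). As an illustrative sketch only, the parameter savings can be computed directly:

```python
# Illustrative sketch: compares parameter counts of a standard convolution
# against a depthwise-separable one, the usual trick behind "light"
# segmentation backbones. The paper's actual layers are not given here.

def conv_params(k, c_in, c_out):
    """Parameters of a standard k x k convolution (biases omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

if __name__ == "__main__":
    k, c_in, c_out = 3, 32, 64
    std = conv_params(k, c_in, c_out)                 # 18432
    sep = depthwise_separable_params(k, c_in, c_out)  # 2336
    print(f"standard: {std}, separable: {sep}, "
          f"reduction: {std / sep:.1f}x")
```

For a typical 3x3 layer with 32 input and 64 output channels, the separable form uses roughly 8x fewer parameters, which is the kind of reduction that makes real-time inference on an embedded platform feasible.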
Vision-based Obstacle Removal System for Autonomous Ground Vehicles Using a Robotic Arm
Over the past few years, the use of camera-equipped robotic platforms for
data collection and visual monitoring applications has grown exponentially.
Cluttered construction sites with many objects (e.g., bricks, pipes, etc.) on
the ground are challenging environments for a mobile unmanned ground vehicle
(UGV) to navigate. To address this issue, this study presents a mobile UGV
equipped with a stereo camera and a robotic arm that can remove obstacles along
the UGV's path. To achieve this objective, the surrounding environment is
captured by the stereo camera and obstacles are detected. The obstacle's
relative location to the UGV is sent to the robotic arm module through Robot
Operating System (ROS). Then, the robotic arm picks up and removes the
obstacle. The proposed method will greatly enhance the degree of automation and
the frequency of data collection for construction monitoring. The proposed
system is validated through two case studies. The results successfully
demonstrate the detection and removal of obstacles, serving as one of the
enabling factors for developing an autonomous UGV with various construction
operating applications.
Comment: The 2019 ASCE International Conference on Computing in Civil Engineering
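The obstacle's relative location that gets sent to the robotic arm over ROS is derived from stereo imagery. A minimal sketch of the underlying pinhole-stereo geometry, with all camera parameters (focal length, baseline, principal point) assumed for illustration since the abstract does not give the calibration:

```python
# Hypothetical calibration values for illustration only; the actual stereo
# rig parameters used in the paper are not stated in the abstract.
FOCAL_PX = 700.0       # focal length in pixels
BASELINE_M = 0.12      # distance between the stereo cameras in meters
CX, CY = 320.0, 240.0  # principal point (image center in pixels)

def obstacle_relative_position(u, v, disparity):
    """Back-project a matched pixel (u, v) with the given disparity
    (in pixels) into the camera frame; returns (X, Y, Z) in meters."""
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    z = FOCAL_PX * BASELINE_M / disparity  # depth from stereo triangulation
    x = (u - CX) * z / FOCAL_PX            # lateral offset from optical axis
    y = (v - CY) * z / FOCAL_PX            # vertical offset from optical axis
    return x, y, z

# Example: a detection at the image center with 35 px of disparity
# lies straight ahead at z = 700 * 0.12 / 35 = 2.4 m.
x, y, z = obstacle_relative_position(320.0, 300.0, 35.0)
```

In a system like the one described, the resulting (X, Y, Z) would then be published to the arm module as a ROS message (e.g., a `geometry_msgs/Point`), but that transport layer is omitted here.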