Optical Flow Based Real-time Moving Object Detection in Unconstrained Scenes
Real-time moving object detection in unconstrained scenes is a difficult task
due to dynamic backgrounds, changing foreground appearance, and limited
computational resources. In this paper, an optical flow based moving object
detection framework is proposed to address this problem. We utilize homography
matrices to construct a background model online in the form of optical flow.
To distinguish moving foregrounds from the scene, a dual-mode judgment mechanism
is designed to improve the system's adaptability to challenging situations. In
the experimental section, two evaluation metrics are redefined to more properly
reflect the performance of the methods. We quantitatively and qualitatively
validate the effectiveness and feasibility of our method with videos in various
scene conditions. The experimental results show that our method adapts itself
to different situations and outperforms the state-of-the-art methods,
indicating the advantages of optical flow based methods.
Comment: 7 pages, 5 figures
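The homography-based background model described above can be illustrated with a small sketch: fit a homography to background point correspondences, predict the camera-induced flow each pixel would have, and flag pixels whose observed flow deviates. This is an illustrative reconstruction, not the authors' code; the plain DLT fit and the residual threshold are assumptions.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography H mapping src -> dst points.

    Uses the direct linear transform (DLT): stack two linear
    constraints per correspondence and take the SVD null vector.
    src, dst: (N, 2) arrays with N >= 4 non-collinear points.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def background_flow(H, pts):
    """Flow each point would have if it followed the camera-induced homography."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    warped = p[:, :2] / p[:, 2:3]
    return warped - pts

def moving_mask(H, pts, observed_flow, thresh=2.0):
    """Points whose observed flow deviates from the background model are moving."""
    residual = np.linalg.norm(observed_flow - background_flow(H, pts), axis=1)
    return residual > thresh
```

In practice the correspondences would come from tracked feature points and the dense flow from an optical-flow estimator; here both are assumed given.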
Motion Control on Bionic Eyes: A Comprehensive Review
Biology can provide biomimetic components and new control principles for
robotics. Developing a robot system equipped with bionic eyes is a difficult
but exciting task. Researchers have been studying the control mechanisms of
bionic eyes for many years, and numerous models are available. In this
paper, control models for bionic eyes and their implementations on robots are
reviewed, covering saccades, smooth pursuit, vergence, the vestibulo-ocular
reflex (VOR), the optokinetic reflex (OKR), and eye-head coordination. In
addition, open problems and possible solutions in the field of bionic eyes are
discussed and analyzed. This review can serve as a guide for researchers to
identify potential research problems and solutions in the motion control of
bionic eyes.
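As a toy illustration of one reviewed mechanism: an idealized VOR counter-rotates the eye against head motion, and its quality is commonly summarized by a gain (eye speed over head speed). This is a minimal textbook sketch, not any specific model from the review:

```python
def vor_eye_velocity(head_velocity, gain=1.0):
    """Idealized vestibulo-ocular reflex: the eye counter-rotates
    against head motion, scaled by the VOR gain."""
    return -gain * head_velocity

def gaze_velocity(head_velocity, gain=1.0):
    """Gaze velocity = head + eye; a unit-gain VOR keeps gaze stable,
    and any residual gaze motion is what the slower OKR must correct."""
    return head_velocity + vor_eye_velocity(head_velocity, gain)
```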
Human Following for Wheeled Robot with Monocular Pan-tilt Camera
Human following by mobile robots has witnessed significant advances due to
its potential for real-world applications. Currently, most human following
systems are equipped with depth sensors to obtain the distance between
human and robot, which imposes sensing requirements and is susceptible to
noise. In this paper, we design a wheeled mobile robot system with a monocular
pan-tilt camera to follow a human, keeping the target in the field of view
while following. The system consists of a fast human detector, a real-time and
accurate visual tracker, and a unified controller for the mobile robot and
pan-tilt camera. In the visual tracking algorithm, both Siamese networks and
optical flow information are exploited to locate and regress the human target
simultaneously. In order to perform following with a monocular camera, a
constraint on human height is introduced to design the controller. In the
experiments, human following is conducted and analysed in simulation and on a
real robot platform, demonstrating the effectiveness and robustness of the
overall system.
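The human-height constraint can be sketched with the pinhole camera model: a person's assumed height H and observed pixel height h give depth Z = f·H/h, which can then drive a simple proportional controller. The gains, desired distance, and assumed height below are illustrative placeholders, not the paper's values or controller.

```python
def estimate_distance(focal_px, real_height_m, bbox_height_px):
    """Pinhole model: Z = f * H / h.
    Assumes an upright, fully visible person of known height."""
    return focal_px * real_height_m / bbox_height_px

def follow_command(focal_px, cx_px, image_cx, bbox_h_px,
                   real_height_m=1.7, desired_dist_m=1.5,
                   k_v=0.8, k_w=0.002):
    """Toy proportional controller: linear velocity from the distance
    error, angular velocity (or pan rate) from the horizontal pixel
    offset of the detection centre."""
    dist = estimate_distance(focal_px, real_height_m, bbox_h_px)
    v = k_v * (dist - desired_dist_m)
    w = -k_w * (cx_px - image_cx)
    return v, w
```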
LittleYOLO-SPP: A Delicate Real-Time Vehicle Detection Algorithm
Vehicle detection in real time is a challenging and important task. Existing
real-time vehicle detectors lack accuracy and speed. Real-time systems must
detect and locate vehicles with high accuracy during criminal activities such
as vehicle theft and road traffic violations. Detecting vehicles in complex
scenes with occlusion is also extremely difficult. In this study, a lightweight
deep neural network, LittleYOLO-SPP, based on the YOLOv3-tiny network is
proposed to detect vehicles effectively in real time. The YOLOv3-tiny object
detection network is improved by modifying its feature extraction network to
increase the speed and accuracy of vehicle detection. The proposed network
incorporates spatial pyramid pooling, which concatenates features from pooling
layers at different scales to enhance the network's learning capability. Mean
squared error (MSE) and generalized IoU (GIoU) loss functions are used for
bounding box regression to increase the performance of the network. The
network is trained on vehicle classes such as car, bus, and truck from the
PASCAL VOC 2007, 2012 and MS COCO 2014 datasets. The LittleYOLO-SPP network
detects vehicles in real time with high accuracy across varied video frames
and weather conditions. The improved network achieves a higher mAP of 77.44%
on PASCAL VOC and 52.95% mAP
on the MS COCO dataset.
Comment: 18 pages, 8 figures, 7 tables
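The GIoU term used alongside MSE for bounding-box regression is a standard quantity and can be sketched directly for axis-aligned boxes; how it is weighted inside the YOLO loss is the paper's design and is not shown here.

```python
def giou(box_a, box_b):
    """Generalized IoU of two (x1, y1, x2, y2) boxes, in [-1, 1].
    Unlike plain IoU, it stays informative for non-overlapping boxes
    by penalizing the empty area of the smallest enclosing box."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    # Smallest axis-aligned box enclosing both inputs.
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return inter / union - (c_area - union) / c_area

def giou_loss(box_a, box_b):
    """GIoU loss: 0 for identical boxes, up to 2 for distant ones."""
    return 1.0 - giou(box_a, box_b)
```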
High Performance Visual Object Tracking with Unified Convolutional Networks
Convolutional neural network (CNN) based tracking approaches have shown
favorable performance on recent benchmarks. Nonetheless, the chosen CNN
features are typically pre-trained on different tasks, and the individual
components of tracking systems are learned separately, so the achieved
tracking performance may be suboptimal. Moreover, most of these trackers are
not designed for real-time applications because of their time-consuming
feature extraction and complex optimization details. In this paper, we propose
an end-to-end framework to learn the convolutional features and perform the
tracking process simultaneously, namely a unified convolutional tracker (UCT).
Specifically, the UCT treats both the feature extractor and the tracking
process as convolution operations and trains them jointly, so that the learned
CNN features are tightly coupled with the tracking process. During online
tracking, an efficient model updating method is proposed by introducing a
peak-versus-noise ratio (PNR) criterion, and scale changes are handled
efficiently by incorporating a scale branch into the network. Experiments are
performed on four challenging tracking datasets: OTB2013, OTB2015, VOT2015 and
VOT2016. Our method achieves leading
performance on these benchmarks while maintaining beyond real-time speed.
Comment: Extended version of [arXiv:1711.04661], our UCT tracker in ICCV
VOT201
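The PNR-gated model update can be sketched as follows. The paper's exact PNR formula is not reproduced here; this ratio (peak response over the mean of the remaining responses) and the threshold are assumptions made for illustration.

```python
import numpy as np

def peak_vs_noise_ratio(response):
    """One plausible PNR: the peak of the tracker's response map
    divided by the mean of all other (noise) responses."""
    r = np.asarray(response, float).ravel()
    peak = r.max()
    noise = np.delete(r, r.argmax())
    return peak / (noise.mean() + 1e-12)

def should_update(response, threshold=10.0):
    """Update the appearance model only when the response is sharp
    and unambiguous, to avoid contaminating it during occlusion
    or drift (the intent behind a PNR criterion)."""
    return peak_vs_noise_ratio(response) > threshold
```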