An optimal lifting multiwavelet for rotating machinery fault detection
The vibration signals acquired from rotating machinery are often complex, and fault features are masked by background noise. Feature extraction and denoising are the keys to rotating machinery fault detection, and advanced signal processing methods are needed to analyze such vibration signals. In this paper, an optimal lifting multiwavelet denoising method is developed for rotating machinery fault detection. Minimum energy entropy is used as the metric to optimize the lifting multiwavelet coefficients, and the optimal lifting multiwavelet is constructed to capture the vibration signal characteristics. An improved denoising threshold method is used to remove the background noise. The proposed method is applied to turbine generator and rolling bearing fault detection to verify its effectiveness. The results show that the method is a robust approach to revealing impulses hidden in background noise, and it performs well for rotating machinery fault detection.
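The two ingredients of this abstract can be sketched briefly: an energy-entropy metric over coefficient bands (lower entropy means energy concentrated in a few bands, which is what the optimization minimizes) and a thresholding rule for denoising. This is an illustrative numpy sketch with made-up toy data, not the paper's lifting multiwavelet construction.

```python
import numpy as np

def energy_entropy(coeff_bands):
    """Shannon entropy of the normalized band energies.

    Lower entropy means the signal energy is concentrated in a few
    bands -- the criterion minimized when tuning the multiwavelet.
    """
    energies = np.array([np.sum(c ** 2) for c in coeff_bands])
    p = energies / energies.sum()
    p = p[p > 0]                      # avoid log(0)
    return -np.sum(p * np.log(p))

def soft_threshold(c, thr):
    """Classic soft-thresholding denoising rule (illustrative stand-in
    for the paper's improved threshold method)."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

# Toy example: one impulsive band plus two noise-like bands.
rng = np.random.default_rng(0)
impulsive = np.zeros(64)
impulsive[10] = 5.0                   # a single fault impulse
bands = [impulsive,
         0.1 * rng.standard_normal(64),
         0.1 * rng.standard_normal(64)]
h = energy_entropy(bands)             # low: energy sits in one band
denoised = soft_threshold(bands[1], 0.5)  # pure noise is zeroed out
```

A concentrated-energy decomposition scores well below the maximum entropy log(3) of three equal-energy bands, which is why the metric favors multiwavelets matched to the impulses.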
Co-interest Person Detection from Multiple Wearable Camera Videos
Wearable cameras, such as Google Glass and GoPro, enable video data
collection over larger areas and from different views. In this paper, we tackle
a new problem of locating the co-interest person (CIP), i.e., the one who draws
attention from most camera wearers, from temporally synchronized videos taken
by multiple wearable cameras. Our basic idea is to exploit the motion patterns
of people and use them to correlate the persons across different videos,
instead of performing appearance-based matching as in traditional video
co-segmentation/localization. This way, we can identify the CIP even if a group of
people with similar appearance are present in the view. More specifically, we
detect a set of persons on each frame as the candidates of the CIP and then
build a Conditional Random Field (CRF) model to select the one with consistent
motion patterns in different videos and high spatial-temporal consistency in
each video. We collect three sets of wearable-camera videos for testing the
proposed algorithm. All the involved people have similar appearances in the
collected videos and the experiments demonstrate the effectiveness of the
proposed algorithm.
Comment: ICCV 201
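Selecting one candidate per frame under a per-frame (unary) cost plus a pairwise consistency cost is, in its simplest chain form, a Viterbi dynamic program. The paper's CRF couples multiple videos and is richer than this; the sketch below only illustrates the MAP-selection idea on a single chain, with hypothetical cost matrices.

```python
import numpy as np

def viterbi_select(unary, pairwise):
    """MAP assignment on a chain CRF by dynamic programming.

    unary[t, k]    -- cost of picking candidate k at frame t
    pairwise[k, j] -- transition cost (e.g. motion inconsistency)
    Returns the minimum-cost candidate index for every frame.
    """
    T, K = unary.shape
    cost = unary[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # total[k, j]: best cost ending in k at t-1, then j at t
        total = cost[:, None] + pairwise + unary[t][None, :]
        back[t] = np.argmin(total, axis=0)
        cost = np.min(total, axis=0)
    # Trace the optimal path backwards.
    path = [int(np.argmin(cost))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two candidates over three frames; a large pairwise cost discourages
# switching identity mid-track, so candidate 0 wins throughout.
unary = np.array([[0., 1.], [1., 0.], [0., 1.]])
pairwise = np.array([[0., 10.], [10., 0.]])
path = viterbi_select(unary, pairwise)
```

The large off-diagonal pairwise cost plays the role of the spatial-temporal consistency term: a candidate that is cheap in one frame but inconsistent with its neighbors is rejected.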
Why Shallow Networks Struggle with Approximating and Learning High Frequency: A Numerical Study
In this work, a comprehensive numerical study involving analysis and
experiments shows why a two-layer neural network has difficulties handling high
frequencies in approximation and learning when machine precision and
computation cost are important factors in practice. In particular, the
following basic computational issues are investigated: (1) the minimal
numerical error one can achieve given a finite machine precision, (2) the
computation cost to achieve a given accuracy, and (3) stability with respect to
perturbations. The key to the study is the conditioning of the representation
and its learning dynamics. Explicit answers to the above questions with
numerical verifications are presented.
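The difficulty the abstract describes is easy to reproduce numerically: with a fixed random first layer (a two-layer network with only the outer weights trained, i.e. a random-feature model), least-squares fitting of a low-frequency sine succeeds while a high-frequency sine leaves most of its energy unexplained. This is an illustrative experiment, not the paper's analysis; the frequencies and widths below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 50                          # samples, hidden neurons
x = np.linspace(-1.0, 1.0, n)

# Fixed random ReLU features: the hidden layer of a two-layer network
# whose inner weights are frozen; only the outer layer is fitted.
w = rng.standard_normal(m)
b = rng.standard_normal(m)
phi = np.maximum(np.outer(x, w) + b, 0.0)   # (n, m) design matrix

def rel_residual(target):
    """Relative least-squares error of fitting `target` in span(phi)."""
    coef, *_ = np.linalg.lstsq(phi, target, rcond=None)
    return np.linalg.norm(phi @ coef - target) / np.linalg.norm(target)

low = rel_residual(np.sin(np.pi * x))        # low frequency: easy
high = rel_residual(np.sin(20 * np.pi * x))  # high frequency: hard
```

The gap between `low` and `high` reflects the conditioning issue the study investigates: representing a high-frequency target in this basis requires coefficients that finite precision and reasonable computation budgets cannot deliver.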
Detecting phone-related pedestrian distracted behaviours via a two-branch convolutional neural network
The distracted phone-use behaviours among pedestrians, like Texting, Game Playing and Phone Calls, have caused increasing fatalities and injuries. However, phone-related distracted behaviour by pedestrians has not been systematically studied. Automatically discovering phone-related pedestrian distracted behaviours would improve both driving and pedestrian safety. Herein, a new computer vision-based method is proposed to detect phone-related pedestrian distracted behaviours from the perspective of intelligent and autonomous driving. Specifically, the first end-to-end deep learning based Two-Branch Convolutional Neural Network (CNN) is designed for this task. Taking one synchronised image pair from two front on-car GoPro cameras as the input, the proposed two-branch CNN extracts features for each camera, fuses the extracted features and performs a robust classification. The method can also be easily extended to video-based classification by confidence accumulation and voting. A new benchmark dataset of 448 synchronised video pairs, comprising 53,760 images collected on a vehicle, is introduced for this research. The experimental results show that using two synchronised cameras yields better performance than using a single camera. Finally, the proposed method achieved an overall best classification accuracy of 84.3% on the new benchmark when compared to other methods.
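The video-based extension mentioned above, confidence accumulation, amounts to summing per-frame class confidences and taking the argmax. A minimal sketch, assuming the per-frame softmax outputs come from the (here hypothetical) two-branch CNN:

```python
import numpy as np

def video_label(frame_probs):
    """Video-level decision from per-frame class confidences.

    frame_probs: (num_frames, num_classes) array of per-frame softmax
    outputs. Confidences are accumulated across frames and the class
    with the highest total wins.
    """
    return int(np.argmax(frame_probs.sum(axis=0)))

# Three frames, three classes (e.g. texting / calling / no phone).
probs = np.array([
    [0.6, 0.3, 0.1],   # frame 1: leaning towards class 0
    [0.2, 0.5, 0.3],   # frame 2: noisy, favors class 1
    [0.7, 0.2, 0.1],   # frame 3: class 0 again
])
label = video_label(probs)   # accumulated: [1.5, 1.0, 0.5] -> class 0
```

Accumulating confidences rather than hard per-frame votes lets confident frames outweigh ambiguous ones, which is why a single noisy frame (frame 2) does not flip the video-level decision.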
Domain Adaptation For Vehicle Detection In Traffic Surveillance Images From Daytime To Nighttime
Vehicle detection in traffic surveillance images is an important approach to obtaining vehicle data and rich traffic flow parameters. Recently, deep learning based methods have been widely used in vehicle detection with high accuracy and efficiency. However, deep learning based methods require a large number of manually labeled ground truths (the bounding box of each vehicle in each image) to train the Convolutional Neural Networks (CNN). For modern urban surveillance cameras, many manually labeled ground truths already exist for daytime images, while there are few or far fewer for nighttime images. In this paper, we focus on making maximum use of labeled daytime images (Source Domain) to help vehicle detection in unlabeled nighttime images (Target Domain). For this purpose, we propose a new method based on Faster R-CNN with Domain Adaptation (DA) to improve vehicle detection at nighttime. With the assistance of DA, the distribution discrepancy between the Source and Target Domains is reduced. We collected a new dataset of 2,200 traffic images (1,200 daytime and 1,000 nighttime) containing 57,059 vehicles for training and testing the CNN. In the experiment, using only the manually labeled ground truths of the daytime data, Faster R-CNN obtained an F-measure of 82.84% on nighttime vehicle detection, while the proposed method (Faster R-CNN+DA) achieved an F-measure of 86.39%.
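The "domain distribution discrepancy" being reduced can be made concrete with a standard measure such as Maximum Mean Discrepancy (MMD). The paper's DA component is built into Faster R-CNN rather than using MMD; this numpy sketch with synthetic "daytime" and "nighttime" feature clouds is only an illustrative stand-in for quantifying the gap.

```python
import numpy as np

def mmd_rbf(x, y, gamma=1.0):
    """Squared Maximum Mean Discrepancy with an RBF kernel.

    A common way to quantify how far apart two feature
    distributions are; domain adaptation aims to drive this
    kind of discrepancy towards zero.
    """
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
day = rng.normal(0.0, 1.0, (100, 2))     # synthetic "daytime" features
night = rng.normal(2.0, 1.0, (100, 2))   # shifted "nighttime" features
same = rng.normal(0.0, 1.0, (100, 2))    # another daytime-like sample

gap = mmd_rbf(day, night)   # large: domains differ
base = mmd_rbf(day, same)   # small: same distribution
```

A well-adapted feature extractor would bring `gap` down towards `base`, which is the effect the DA module has on the shared Faster R-CNN features.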