DxNAT - Deep Neural Networks for Explaining Non-Recurring Traffic Congestion
Non-recurring traffic congestion is caused by temporary disruptions such as accidents, sports games, and adverse weather. We use data related to real-time traffic speed, jam factors (a traffic congestion indicator), and events, collected over a year from Nashville, TN, to train a multi-layered deep neural network. The traffic dataset contains over 900 million data records. The trained network is then used to classify real-time data and identify anomalous operations. Compared with traditional statistical and machine learning approaches, our model reaches an accuracy of 98.73 percent when identifying traffic congestion caused by football games. Our approach first encodes the traffic across a region as a scaled image. The image data from different timestamps is then fused with event- and time-related data, and a crossover operator is used as a data augmentation method to generate training datasets with more balanced classes. Finally, we use receiver operating characteristic (ROC) analysis to tune the sensitivity of the classifier. We present the analysis of the training time and the inference time separately.
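The class-balancing and sensitivity-tuning steps can be illustrated with a short sketch. The snippet below is a minimal, hypothetical reading of the abstract: `crossover_augment` mixes pairs of minority-class feature arrays at a random cut point (the paper does not specify its exact operator), and `tune_threshold` picks a decision threshold from a ROC curve by maximising Youden's J, one common criterion that the paper may or may not use.

```python
import numpy as np
from sklearn.metrics import roc_curve

def crossover_augment(samples, n_new, rng=None):
    """Synthesise extra minority-class samples with a crossover operator:
    each child takes a random prefix from one parent and the matching
    suffix from another. Hypothetical sketch; input shapes are preserved."""
    rng = rng or np.random.default_rng()
    samples = np.asarray(samples)
    children = []
    for _ in range(n_new):
        i, j = rng.choice(len(samples), size=2, replace=False)
        a, b = samples[i].ravel(), samples[j].ravel()
        cut = int(rng.integers(1, a.size))  # random crossover point
        children.append(np.concatenate([a[:cut], b[cut:]]).reshape(samples[i].shape))
    return np.stack(children)

def tune_threshold(y_true, scores):
    """Choose the classifier's operating point from the ROC curve by
    maximising Youden's J = TPR - FPR (one plausible tuning rule)."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    return thresholds[np.argmax(tpr - fpr)]
```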
Autonomous learning for face recognition in the wild via ambient wireless cues
Facial recognition is a key enabling component for emerging Internet of Things (IoT) services such as smart homes and responsive offices. Through the use of deep neural networks, facial recognition has achieved excellent performance. However, this is only possible when the models are trained with hundreds of images of each user in different viewing and lighting conditions. Clearly, this level of enrolment and labelling effort is impossible for widespread deployment and adoption. Inspired by the fact that most people carry smart wireless devices with them, e.g. smartphones, we propose to use this wireless identifier as a supervisory label. This allows us to curate a dataset of facial images that are unique to a certain domain, e.g. a set of people in a particular office. This custom corpus can then be used to finetune existing pre-trained models, e.g. FaceNet. However, due to the vagaries of wireless propagation in buildings, the supervisory labels are noisy and weak. We propose a novel technique, AutoTune, which learns and refines the association between a face and a wireless identifier over time by increasing the inter-cluster separation and minimizing the intra-cluster distance. Through extensive experiments with multiple users at two sites, we demonstrate the ability of AutoTune to build an environment-specific, continually evolving facial recognition system with no user effort.
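As a concrete illustration of the clustering idea, the sketch below shows one plausible association round: face embeddings are clustered, and clusters are matched to wireless identifiers by maximising co-occurrence between cluster members and device sightings. The names (`assign_ids`, `sighting_matrix`) and the choice of k-means plus the Hungarian algorithm are assumptions made for illustration; AutoTune's actual procedure refines these associations iteratively over time.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

def assign_ids(embeddings, sighting_matrix, n_ids, seed=0):
    """One hypothetical association round: cluster face embeddings
    (e.g. from a pre-trained FaceNet), then match each cluster to the
    wireless identifier it co-occurs with most often.

    embeddings      -- (n_faces, d) face embedding matrix
    sighting_matrix -- (n_faces, n_ids) binary matrix; entry (i, j) is 1
                       if device j was sensed when face i was captured
    """
    labels = KMeans(n_clusters=n_ids, random_state=seed).fit_predict(embeddings)
    # Count how often each cluster's faces coincide with each device.
    cooc = np.array([sighting_matrix[labels == c].sum(axis=0)
                     for c in range(n_ids)])
    # Hungarian assignment on negated counts, i.e. maximise co-occurrence.
    clusters, devices = linear_sum_assignment(-cooc)
    mapping = dict(zip(clusters, devices))
    return np.array([mapping[c] for c in labels])  # pseudo-label per face
```

These pseudo-labels could then drive a finetuning step on the pre-trained model, with the loop repeated as new sightings arrive.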
Convolutional neural network for breathing phase detection in lung sounds
We applied deep learning to create an algorithm for breathing phase detection in lung sound recordings, and we compared the breathing phases detected by the algorithm with those manually annotated by two experienced lung sound researchers. Our algorithm uses a convolutional neural network with spectrograms as the features, removing the need to specify features explicitly. We trained and evaluated the algorithm on three subsets of data that are larger than those previously reported in the literature. We evaluated the performance in two ways. First, a discrete count of agreed breathing phases (requiring 50% overlap between a pair of boxes) shows a mean agreement with lung sound experts of 97% for inspiration and 87% for expiration. Second, the fraction of time in agreement (in seconds) gives higher pseudo-kappa values for inspiration (0.73-0.88) than for expiration (0.63-0.84), with an average sensitivity of 97% and an average specificity of 84%. With both evaluation methods, the agreement between the annotators and the algorithm shows human-level performance for the algorithm. The developed algorithm is valid for detecting breathing phases in lung sound recordings.
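The first evaluation criterion lends itself to a small, self-contained sketch. The code below treats each breathing phase as a time interval and counts a detection as agreeing with an annotation when their intersection covers at least 50% of the shorter interval; exactly which box the 50% is measured against is an assumption, as the abstract does not say.

```python
def overlap_fraction(a, b):
    """Overlap between intervals a = (start, end) and b = (start, end),
    expressed as a fraction of the shorter interval."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    return inter / min(a[1] - a[0], b[1] - b[0])

def count_agreed_phases(detected, annotated, min_overlap=0.5):
    """Discrete count of detected phases that match a distinct annotated
    phase with at least `min_overlap` overlap (a hypothetical reading of
    the paper's 50%-overlap criterion)."""
    matched, hits = set(), 0
    for d in detected:
        for i, a in enumerate(annotated):
            if i not in matched and overlap_fraction(d, a) >= min_overlap:
                matched.add(i)
                hits += 1
                break
    return hits
```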