Three-dimensional face recognition: An Eigensurface approach
We evaluate a new approach to face recognition using a variety of surface representations of three-dimensional facial structure. Applying principal component analysis (PCA), we show that high levels of recognition accuracy can be achieved on a large database of 3D face models, captured under conditions that present typical difficulties to more conventional two-dimensional approaches. Applying a range of image processing techniques, we identify the most effective surface representation for use in such application areas as security surveillance, data compression and archive searching.
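The eigensurface idea can be sketched as follows: treat each 3D face as a depth map, flatten it to a vector, and apply PCA so that faces are compared by their low-dimensional projections. This is a minimal illustration only; the function names, shapes, and nearest-neighbour matching rule are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def eigensurfaces(depth_maps, n_components=5):
    """PCA on flattened depth maps; returns the mean surface,
    the principal components ("eigensurfaces"), and the projections."""
    X = np.asarray([m.ravel() for m in depth_maps], dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data yields the principal surface directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    comps = Vt[:n_components]
    proj = Xc @ comps.T          # low-dimensional face codes
    return mean, comps, proj

def match(probe, gallery_proj, mean, comps):
    """Nearest neighbour in eigensurface space (illustrative matcher)."""
    code = (probe.ravel() - mean) @ comps.T
    d = np.linalg.norm(gallery_proj - code, axis=1)
    return int(np.argmin(d))
```

A probe depth map is projected with the same mean and components as the gallery, so recognition reduces to a distance comparison in the reduced space.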
Outlier detection in large high-dimensional data and its application in stock market surveillance
University of Technology, Sydney. Faculty of Engineering and Information Technology.
Outlier detection techniques play an important role in stock market surveillance, which involves analysis of large volumes of high-dimensional trading data. However, outlier detection in large high-dimensional data is very challenging and is not well addressed by existing techniques. Firstly, it is difficult to select useful and relevant features from high-dimensional data. Secondly, large high-dimensional data require more efficient algorithms.
To attack the above issues brought by large high-dimensional data, this thesis presents two outlier detection models and one subspace clustering model.
Firstly, an outlier mining model is proposed to detect the outliers from multiple complex stock market data. In order to improve the efficiency of outlier detection, a financial model is used to select the features to construct multiple datasets. This model is able to improve the precision of outlier mining on individual measurements. The experiments on real-world stock market data show that the proposed model is effective and outperforms traditional technologies.
Secondly, in order to find relevant features automatically, an agent-based algorithm is proposed to discover subspace clusters in high dimensional data. Each data object is represented by an agent, and the agents move from one local environment to another to find optimal clusters in subspaces. Heuristic rules and objective functions are defined to guide the movements of agents, so that similar agents (data objects) go to one group. The experimental results show that our proposed agent-based subspace clustering algorithm performs better than existing subspace clustering methods on both F1 measure and Entropy. The running time of our algorithm is scalable with the size and dimensionality of data. Furthermore, an application of our technique to stock market surveillance demonstrates its effectiveness in real world applications.
Finally, we propose a reference-based outlier detection model by agent-based subspace clustering. At first, agent-based subspace clustering is utilized to generate clusters in subspaces. After that, the centers of clusters, together with the corresponding subspaces, are used as references, and a reference-based model is employed to find outliers in relevant subspaces. The experimental results on real-world datasets prove that the proposed model is able to effectively and efficiently identify outliers in subspaces.
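The reference-based scoring step above can be illustrated with a small sketch: given cluster centres and the subspace (feature subset) each cluster lives in, a point is scored by its distance to the nearest centre, measured only in that centre's subspace. All names (`references`, `subspaces`, `threshold`) and the thresholding rule are illustrative assumptions, not the thesis's exact formulation.

```python
import math

def outlier_score(point, references, subspaces):
    """Distance to the nearest reference, each reference measured
    only in its own relevant subspace (a tuple of feature indices)."""
    best = math.inf
    for centre, dims in zip(references, subspaces):
        d = math.sqrt(sum((point[j] - centre[j]) ** 2 for j in dims))
        best = min(best, d)
    return best

def detect_outliers(points, references, subspaces, threshold):
    """Indices of points farther than `threshold` from every reference."""
    return [i for i, p in enumerate(points)
            if outlier_score(p, references, subspaces) > threshold]
```

Because each distance ignores the features outside the cluster's subspace, a point can be a clear inlier even if it is extreme in irrelevant dimensions.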
In summary, this thesis investigates outlier detection techniques for high-dimensional data and their application in stock market surveillance. The proposed models are novel and effective, and have shown their potential in real business applications.
Editorial: Introduction to the Special Issue on Deep Learning for High-Dimensional Sensing
The papers in this special section focus on deep learning for high-dimensional sensing. People live in a high-dimensional world, and sensing is the first step to perceive and understand the environment for both human beings and machines. Therefore, high-dimensional sensing (HDS) plays a pivotal role in many fields such as robotics, signal processing, computer vision and surveillance. The recent explosive growth of artificial intelligence has provided new opportunities and tools for HDS, especially for machine vision. In many emerging real applications such as advanced driver assistance systems/autonomous driving systems, large-scale, high-dimensional and diverse types of data need to be captured and processed with high accuracy and in a real-time manner. Bearing this in mind, now is the time to develop new sensing and processing techniques with high performance to capture high-dimensional data by leveraging recent advances in deep learning (DL).
L1-norm Regularized L1-norm Best-Fit Line Problem
Background
Conventional Principal Component Analysis (PCA) is a widely used technique for reducing data dimension. PCA finds linear combinations of the original features capturing the maximal variance of the data via Singular Value Decomposition (SVD). However, SVD is sensitive to outliers and often yields high-dimensional, non-sparse results. To address these issues, we propose a new method to estimate a best-fit one-dimensional subspace, called the l1-norm Regularized l1-norm Best-Fit Line.
Methods
In this article, we describe a method to fit a lower-dimensional subspace by approximately solving a non-linear, non-convex, non-smooth optimization problem, the l1-norm regularized l1-norm best-fit line problem, which minimizes a combination of the l1 fitting error and an l1 regularization term. The procedure can be performed simply using ratios and sorting. We also present applications in the area of video surveillance, where our methodology enables background subtraction in the presence of jitter, illumination changes, and clutter.
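The "ratios and sorting" idea can be made concrete in the simplest case, a 2D line through the origin: minimizing sum_i |y_i - b*x_i| + lam*|b| reduces to a weighted median of the ratios y_i/x_i with weights |x_i|, where the lam*|b| term acts as one extra ratio of 0 with weight lam. This is a hedged toy of that one-dimensional reduction, not the paper's full d-dimensional algorithm.

```python
def l1_reg_best_fit_slope(xs, ys, lam=0.0):
    """Slope b minimizing sum |y_i - b*x_i| + lam*|b|,
    computed as a weighted median of ratios (the sorting step)."""
    pairs = [(y / x, abs(x)) for x, y in zip(xs, ys) if x != 0]
    pairs.append((0.0, lam))        # regularizer pulls the slope toward 0
    pairs.sort()                    # sort the ratios
    total = sum(w for _, w in pairs)
    acc = 0.0
    for r, w in pairs:              # walk until half the weight is covered
        acc += w
        if acc >= total / 2:
            return r
```

Because the objective is piecewise linear and convex in b, the weighted median is an exact minimizer here; the median is also what makes the fit robust to a grossly corrupted point.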
Results
We compared our performance with SVD on synthetic data. The numerical results showed that our algorithm found a better principal component from grossly corrupted data than SVD did, in terms of discordance. Moreover, our algorithm produced a sparser principal component than SVD. However, we expect it to be faster in a multi-node environment.
Conclusions
This paper proposes a new algorithm able to generate a sparse best-fit subspace that is robust to outliers. The subspaces found on non-contaminated data differ little from those of traditional PCA. When subspaces are projected from contaminated data, the method attains both smaller discordance and lower dimensionality than traditional PCA.
Computationally Efficient Target Classification in Multispectral Image Data with Deep Neural Networks
Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive, and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario, we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
Comment: Presented at SPIE Security + Defence 2016, Proc. SPIE 9997, Target and Background Signatures I
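The fusion step described above can be sketched minimally: stack the 3 RGB channels with the 25 VIS-NIR channels into a 28-channel cube and classify each pixel from its combined spectrum. A 1x1 "convolution" (a per-pixel linear layer) stands in here for the full DNN; the shapes and the 8-class count follow the abstract, while the weights and function names are illustrative assumptions.

```python
import numpy as np

N_CLASSES = 8  # class count from the abstract's field-experiment dataset

def fuse(rgb, vis_nir):
    """Stack RGB (H, W, 3) and VIS-NIR (H, W, 25) into a (H, W, 28) cube."""
    return np.concatenate([rgb, vis_nir], axis=-1)

def per_pixel_logits(cube, weights, bias):
    """1x1 conv: (H, W, 28) @ (28, 8) + (8,) -> (H, W, 8) class scores."""
    return cube @ weights + bias

def label_map(cube, weights, bias):
    """Per-pixel class labels, as in dense scene labeling."""
    return per_pixel_logits(cube, weights, bias).argmax(axis=-1)
```

In a real ConvNet the 1x1 layer would be replaced by spatial convolutions, but the channel-stacking entry point is the same: the network simply sees a 28-channel input image.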