Pattern Recognition
Pattern recognition is a very wide research field, involving topics as diverse as sensors, feature extraction, pattern classification, decision fusion and applications. The signals processed are commonly one-, two- or three-dimensional; the processing is done in real time or takes hours and days; some systems look for a single narrow object class, while others search huge databases for entries with even a small degree of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms and comprises several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments in pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented work. The authors of these 25 contributions present and advocate recent achievements of their research in the field of pattern recognition.
Gaze-Based Human-Robot Interaction by the Brunswick Model
We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model describes face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of non-verbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model allows one to quantitatively evaluate the quality of the interaction using statistical tools that measure how effective the recognition phase is. In this paper we recast this theory for the case in which one of the interactants is a robot; here, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal under consideration.
Randomized Methods for Computing Low-Rank Approximations of Matrices
Randomized sampling techniques have recently proved capable of efficiently solving many standard problems in linear algebra, and of enabling computations at scales far larger than what was previously possible. The new algorithms are designed from the bottom up to perform well in modern computing environments, where the expense of communication is the primary constraint. In extreme cases, the algorithms can even be made to work in a streaming environment where the matrix is not stored at all, and each element can be seen only once. The dissertation describes a set of randomized techniques for rapidly constructing a low-rank approximation to a matrix. The algorithms are presented in a modular framework that first computes an approximation to the range of the matrix via randomized sampling. Second, the matrix is projected onto the approximate range, and a factorization (SVD, QR, LU, etc.) of the resulting low-rank matrix is computed via variations of classical deterministic methods. Theoretical performance bounds are provided. Particular attention is given to very large scale computations where the matrix does not fit in RAM on a single workstation. Algorithms are developed for the case where the original matrix must be stored out-of-core but where the factors of the approximation fit in RAM. Numerical examples are provided that perform Principal Component Analysis of a data set that is so large that less than one hundredth of it can fit in the RAM of a standard laptop computer. Furthermore, the dissertation presents a parallelized randomized scheme for computing a reduced-rank Singular Value Decomposition. By parallelizing and distributing both the randomized sampling stage and the processing of the factors in the approximate factorization, the method requires an amount of memory per node which is independent of both dimensions of the input matrix.
Numerical experiments are performed on Hadoop clusters of computers in Amazon's Elastic Compute Cloud with up to 64 total cores. Finally, we directly compare the performance and accuracy of the randomized algorithm with the classical Lanczos method on extremely large, sparse matrices and substantiate the claim that randomized methods are superior in this environment.
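The two-stage framework the abstract describes (randomized range finding, then a deterministic factorization of the projected matrix) can be sketched as follows. This is a minimal illustrative implementation; the function name, the oversampling default, and the test matrix are assumptions for the sketch, not details taken from the dissertation.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=None):
    """Two-stage randomized low-rank SVD sketch.

    Stage 1: approximate the range of A by applying it to a random
    Gaussian test matrix and orthonormalizing the result.
    Stage 2: project A onto that approximate range and factor the
    small resulting matrix with a classical deterministic SVD.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + oversample, n)

    # Stage 1: randomized range finder.
    Omega = rng.standard_normal((n, k))
    Y = A @ Omega                  # sample the range of A
    Q, _ = np.linalg.qr(Y)         # orthonormal basis for the sampled range

    # Stage 2: project, then factor the small k-by-n matrix.
    B = Q.T @ A                    # fits comfortably in fast memory
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small                # lift left factors back to m dimensions
    return U[:, :rank], s[:rank], Vt[:rank, :]

# An exactly rank-20 matrix: the approximation error should be tiny.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 300))
U, s, Vt = randomized_svd(A, rank=20)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(err)
```

Note how the large matrix `A` is touched only through matrix products (`A @ Omega` and `Q.T @ A`), which is what makes the scheme amenable to out-of-core and streaming settings where `A` itself cannot be held in RAM.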
Principal Component Analysis
This book is aimed at raising awareness among researchers, scientists and engineers of the benefits of Principal Component Analysis (PCA) in data analysis. In this book, the reader will find applications of PCA in fields such as image processing, biometrics, face recognition and speech processing. It also covers the core concepts and state-of-the-art methods in data analysis and feature extraction.
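At its core, the PCA surveyed in this book amounts to centring the data and taking the SVD; the leading right singular vectors are the principal axes. A minimal sketch (the function name and the synthetic data are illustrative, not from the book):

```python
import numpy as np

def pca(X, n_components):
    """Minimal PCA via the SVD of the centred data matrix."""
    Xc = X - X.mean(axis=0)            # centre each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]     # principal axes (rows)
    scores = Xc @ components.T         # data projected onto the axes
    explained_var = s**2 / (len(X) - 1)
    return scores, components, explained_var[:n_components]

# Correlated 2-D data: most variance lies along one direction.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]])
scores, comps, var = pca(X, n_components=1)

total_var = (X - X.mean(axis=0)).var(axis=0, ddof=1).sum()
frac = var[0] / total_var              # fraction of variance captured
print(frac)
```

For feature extraction, one keeps only the `scores` (here a single column per sample), which is the dimensionality-reduction step underlying the applications listed above.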
TASI lectures on complex structures
These lecture notes give an introduction to a number of ideas and methods that have been useful in the study of complex systems ranging from spin glasses to D-branes on Calabi-Yau manifolds. Topics include the replica formalism, Parisi's solution of the Sherrington-Kirkpatrick model, overlap order parameters, supersymmetric quantum mechanics, D-brane landscapes and their black hole duals. (109 pages, 16 figures)
Proceedings of the 7th Sound and Music Computing Conference
Proceedings of the SMC2010 - 7th Sound and Music Computing Conference, July 21st - July 24th 2010
New Fundamental Technologies in Data Mining
The progress of data mining technology and its broad public popularity have established the need for a comprehensive text on the subject. The series of books entitled "Data Mining" addresses this need by presenting in-depth descriptions of novel mining algorithms and many useful applications. Beyond a thorough treatment of each topic, the two books offer useful hints and strategies for solving the problems discussed in the individual chapters. The contributing authors have highlighted many future research directions that will foster multi-disciplinary collaborations and hence lead to significant further development in the field of data mining.
Determining uncertainty in the functional quantities of fringe projection
Fringe projection systems can acquire a point cloud of more than a million points in minutes without ever physically touching the measurement surface, and can be assembled from relatively inexpensive off-the-shelf components. Fringe projection systems can conduct measurements faster than their tactile counterparts and typically require less training to do so.
The disadvantage of using a fringe projection system is that the measurements are less accurate than alternative tactile methods, and typical methods for obtaining an uncertainty evaluation within fringe projection require a tactile system as a comparator. Prior to any measurement, fringe projection systems undergo a calibration, whereby the set of functional quantities (defined in this thesis as the system parameters) that map the indication (a set of images) to the measurement (the point cloud) is estimated. The accuracy of the estimated parameters will define the accuracy of any measurements made by the system. The calibration process does not evaluate any uncertainty in the estimated system parameters: the accuracy of the parameter estimates remains unknown, as does their exact effect on the measurement result.
In this thesis, an investigation is made into using the system parameters to evaluate the uncertainty of fringe projection measurements. First, a method is given for localising the centres of ellipses in camera images with an associated uncertainty. This uncertainty is then used to derive the uncertainty in the estimated system parameters. The uncertainty in the system parameters is tested by using them to measure known artefacts, a flatness artefact and two sphere-based artefacts, where the propagated uncertainty is compared against the measurement error. The accuracy of the system parameters is tested by comparing the measurement error against measurements made on a commercial system, the GOM ATOS Core 300. In addition, an exhaustive study of the calibration is undertaken, including curvature, specificity and parameter-stability tests on the non-linear regression used within the calibration.
The sphere-based measurements were found not to be robust enough against measurement noise in fringe projection to provide information on errors caused by the system parameters. This thesis therefore raises questions about the appropriateness of using sphere-based measurements to represent the performance of a fringe projection system. The flatness measurements made using the estimated system parameters achieved an accuracy of approximately 30 μm across a 300 mm × 140 mm flatness artefact, which is similar to the measurements made by the commercial system. However, the estimated uncertainty was unable to explain all of the measurement discrepancy between the fringe projection measurements and the tactile measurements. The specificity test indicated poor specificity of the mathematical model of fringe projection, namely the camera pinhole model with Brown-Conrady distortion. It is concluded that the accuracy of the mathematical model, rather than the accuracy of the inputs to the calibration, has become the limiting factor in the accuracy of fringe projection measurements. Therefore, the uncertainty of the system parameters cannot be used to evaluate the uncertainty of a measurement made using a fringe projection system.
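The idea of propagating parameter uncertainty through to a measurement, as investigated above, can be illustrated with a Monte Carlo sketch. The model below is a deliberately simplified stand-in (a toy depth relation z = f·b/d), and every parameter value and standard uncertainty here is hypothetical; the thesis works with the full camera pinhole model with Brown-Conrady distortion.

```python
import numpy as np

def toy_depth(f, b, d):
    """Toy triangulation-style model: depth from focal length f,
    baseline b and a disparity-like term d. A stand-in for the
    full fringe projection measurement model."""
    return f * b / d

rng = np.random.default_rng(42)
n = 100_000  # Monte Carlo draws

# Hypothetical calibrated system parameters with standard uncertainties,
# sampled as independent Gaussians.
f = rng.normal(2500.0, 1.0, n)    # focal length [px], u = 1.0 px
b = rng.normal(150.0, 0.05, n)    # baseline [mm],    u = 0.05 mm
d = rng.normal(400.0, 0.2, n)     # disparity [px],   u = 0.2 px

z = toy_depth(f, b, d)
z_mean, z_std = z.mean(), z.std()
print(z_mean, z_std)  # measurement estimate and its propagated uncertainty
```

The spread `z_std` is the propagated standard uncertainty of the depth given the assumed parameter uncertainties; the thesis's finding is that, for real fringe projection systems, such a parameter-driven uncertainty budget falls short because the measurement model itself is the limiting factor.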