
    Modeling Nonequilibrium Phase Transitions and Critical Behavior in Complex Systems

    We comment on some recent, as yet unpublished results concerning instabilities in complex systems and their applications. In particular, we briefly describe the main observations from extensive computer simulations of two lattice nonequilibrium models. The first exhibits robust and efficient pattern recognition under coherent synaptic activity; the second exhibits interesting critical behavior and simulates nucleation and spinodal decomposition processes in driven fluids. (Comment: 6 pages, 4 figures)

    Research of Methods for Recognition and Classification of the Speech Signal

    Diploma work on the theme «Research of methods for recognition and classification of the speech signal» by student Semkiv Andrii Volodymyrovych. – Ternopil Ivan Pul'uj National Technical University, Faculty of Computer Information Systems and Software Engineering, Software Engineering Department, group SPm-61 // Ternopil, 2017. Pages – 101, figures – 13, tables – 3, slides – 13, appendices – 4, bibliographic references – 48. The aim of the thesis is to study methods for the recognition and classification of the speech signal using computer modeling techniques, processing incoming information according to mathematical models built on the Fourier transform. Based on this analysis, the advantages and disadvantages of the different approaches and methods are identified. Methods and software used in developing the system: the Java programming language and its libraries, the NetBeans IDE development environment, the MatLab development and simulation environment, and an Agile software development methodology. The result of the work is an optimal method for the recognition and classification of the speech signal. A module of the software system implements a series of algorithms that mitigate the shortcomings of existing methods. Keywords: MATHEMATICAL MODEL, COMPUTER SIMULATION, SPEECH SIGNALS, SOFTWARE SYSTEMS, ALGORITHMS, FOURIER TRANSFORM
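    The abstract names the Fourier transform as the core of the speech-processing pipeline but gives no implementation details. As a minimal sketch of the standard front end such work relies on (framing, windowing, magnitude spectrum), assuming a plain DFT in place of whatever Java/MatLab implementation the thesis actually used:

    ```python
    import cmath
    import math

    def hamming(n):
        # Hamming window of length n, to reduce spectral leakage at frame edges.
        return [0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

    def dft_magnitudes(frame):
        # Naive discrete Fourier transform; returns the magnitude spectrum
        # for the first half of the bins (the input is real-valued).
        n = len(frame)
        mags = []
        for k in range(n // 2 + 1):
            s = sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            mags.append(abs(s))
        return mags

    def frame_features(signal, frame_len=64, hop=32):
        # Split the signal into overlapping windowed frames and return one
        # magnitude spectrum per frame -- the raw features a classifier would consume.
        win = hamming(frame_len)
        feats = []
        for start in range(0, len(signal) - frame_len + 1, hop):
            frame = [s * w for s, w in zip(signal[start:start + frame_len], win)]
            feats.append(dft_magnitudes(frame))
        return feats

    # A sinusoid completing 8 cycles per 64-sample frame peaks at DFT bin 8.
    signal = [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]
    features = frame_features(signal)
    peak_bin = max(range(len(features[0])), key=lambda k: features[0][k])
    ```

    A real system would use an FFT (O(n log n)) and typically further reduce each spectrum to mel or cepstral coefficients; the naive DFT here only illustrates the transform itself.
    
    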

    On Acquisition and Analysis of a Dataset Comprising of Gait, Ear and Semantic data

    In outdoor scenarios such as surveillance, where there is very little control over the environment, complex computer vision algorithms are often required for analysis. However, constrained environments, such as walkways in airports where the surroundings and the path taken by individuals can be controlled, provide an ideal application for such systems. Figure 1.1 depicts an idealised constrained environment. The subject is restricted to a narrow path and, once inside, passes through a volume where lighting and other conditions are controlled to facilitate biometric analysis. The ability to control the surroundings and the flow of people greatly simplifies the computer vision task compared to typical unconstrained environments. Even though biometric datasets with more than one hundred people are increasingly common, there is still very little known about the inter- and intra-subject variation in many biometrics. This information is essential to estimate the recognition capability and limits of automatic recognition systems. In order to accurately estimate the inter- and intra-class variance, substantially larger datasets are required [40]. Covariates such as facial expression, headwear, footwear type, surface type and carried items are attracting increasing attention; given their potentially large impact on an individual's biometrics, large trials need to be conducted to establish how much variance results. This chapter is the first description of the multibiometric data acquired using the University of Southampton's Multi-Biometric Tunnel [26, 37], a biometric portal using automatic gait, face and ear recognition for identification purposes. The tunnel provides a constrained environment and is ideal for use in high-throughput security scenarios and for the collection of large datasets.
    We describe the current state of data acquisition of face, gait, ear, and semantic data and present early results showing the quality and range of data that has been collected. The main novelties of this dataset in comparison with other multi-biometric datasets are: 1. gait data exists for multiple views and is synchronised, allowing 3D reconstruction and analysis; 2. the face data is a sequence of images, allowing for face recognition in video; 3. the ear data is acquired in a relatively unconstrained environment, as a subject walks past; and 4. the semantic data is considerably more extensive than has been available previously. We aim to show the advantages of this new data in biometric analysis, though the scope for such analysis is considerably greater than time and space allow for here.

    DIOR: Dataset for Indoor-Outdoor Reidentification -- Long Range 3D/2D Skeleton Gait Collection Pipeline, Semi-Automated Gait Keypoint Labeling and Baseline Evaluation Methods

    In recent times there has been increased interest in the identification and re-identification of people at long distances, for example from rooftop cameras, UAV cameras, street cameras, and others. Such recognition needs to go beyond the face and use whole-body markers such as gait. However, datasets to train and test such recognition algorithms are not widely available, and fewer still are labeled. This paper introduces DIOR -- a framework for data collection and semi-automated annotation -- and provides a dataset with 14 subjects and 1.649 million RGB frames with 3D/2D skeleton gait labels, including 200 thousand frames from a long-range camera. Our approach leverages advanced 3D computer vision techniques to attain pixel-level accuracy in indoor settings with motion capture systems. Additionally, for outdoor long-range settings, we remove the dependency on motion capture systems and adopt a low-cost hybrid 3D computer vision and learning pipeline with only 4 low-cost RGB cameras, successfully achieving precise skeleton labeling on far-away subjects even when their height is limited to a mere 20-25 pixels within an RGB frame. On publication, we will make our pipeline open for others to use.
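    The abstract does not describe how 2D keypoints relate to the 3D skeleton labels. As a minimal sketch of the underlying geometry any multi-camera skeleton pipeline rests on, the pinhole model below projects a 3D joint to pixel coordinates and lifts it back given depth; the intrinsics and joint position are hypothetical values for illustration, not parameters from the DIOR setup:

    ```python
    def project(point, fx, fy, cx, cy):
        # Pinhole projection of a 3D point (camera frame, metres) to pixels.
        x, y, z = point
        return (fx * x / z + cx, fy * y / z + cy)

    def backproject(u, v, depth, fx, fy, cx, cy):
        # Inverse: lift a pixel back to 3D given its depth along the optical axis.
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return (x, y, depth)

    # Hypothetical intrinsics (focal lengths and principal point) for a 1080p camera.
    FX, FY, CX, CY = 1000.0, 1000.0, 960.0, 540.0

    p3d = (0.3, -0.1, 12.0)              # a joint 12 m from the camera
    u, v = project(p3d, FX, FY, CX, CY)
    p_rec = backproject(u, v, 12.0, FX, FY, CX, CY)
    ```

    With several calibrated cameras the depth is not given but recovered by triangulating the same joint across views; the round trip above is the single-camera building block of that computation.
    
    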