
    Sparse Modeling for Image and Vision Processing

    In recent years, a large amount of multi-disciplinary research has been conducted on sparse models and their applications. In statistics and machine learning, the sparsity principle is used to perform model selection---that is, automatically selecting a simple model among a large collection of them. In signal processing, sparse coding consists of representing data with linear combinations of a few dictionary elements. Subsequently, the corresponding tools have been widely adopted by several scientific communities such as neuroscience, bioinformatics, or computer vision. The goal of this monograph is to offer a self-contained view of sparse modeling for visual recognition and image processing. More specifically, we focus on applications where the dictionary is learned and adapted to data, yielding a compact representation that has been successful in various contexts. Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
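
    As a rough illustration of the sparse coding setting described above (data approximated as sparse linear combinations of learned dictionary atoms), the following sketch uses scikit-learn's DictionaryLearning on synthetic patches; the patch size, dictionary size, sparsity level, and random data are illustrative assumptions, not values from the monograph.

    # Minimal dictionary learning / sparse coding sketch (illustrative parameters).
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 64))        # 500 signals, e.g. flattened 8x8 patches

    # Learn a dictionary of 100 atoms; each signal is coded with at most 5 atoms (OMP).
    dico = DictionaryLearning(n_components=100, transform_algorithm="omp",
                              transform_n_nonzero_coefs=5, max_iter=20, random_state=0)
    codes = dico.fit_transform(X)             # sparse coefficients, shape (500, 100)
    D = dico.components_                      # learned dictionary, shape (100, 64)

    reconstruction = codes @ D
    print("mean nonzeros per code:", np.count_nonzero(codes, axis=1).mean())
    print("relative error:", np.linalg.norm(X - reconstruction) / np.linalg.norm(X))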

    Person Re-Identification Techniques for Intelligent Video Surveillance Systems

    Nowadays, intelligent video surveillance is one of the most active research fields in computer vision and machine learning, providing useful tools for surveillance operators and forensic video investigators. Person re-identification is among these tools: it consists of recognizing whether an individual has already been observed over a network of cameras. It can be employed in several applications, e.g., off-line retrieval of all the video sequences showing an individual of interest whose image is given as a query, or on-line pedestrian tracking over multiple cameras. In off-line retrieval, the goal of a person re-identification system is to support video surveillance operators and forensic investigators in finding an individual of interest in videos acquired by a network of non-overlapping cameras. This is attained by sorting images of previously observed individuals by decreasing similarity to a given probe individual. The task is typically addressed by exploiting clothing appearance, since classical biometric methods such as face recognition are impractical in real-world video surveillance scenarios because of the low quality of the acquired images. Existing clothing appearance descriptors, together with their similarity measures, are mostly aimed at improving ranking quality. They are usually built on part-based body models, extracting image signatures that can be treated independently for different body parts (e.g., torso and legs). While a re-identification model must be robust and discriminative in recognizing the individual of interest, processing time is also crucial for tackling this task in real-world scenarios. Processing time can be viewed from two perspectives: the time to construct a model (descriptor generation), which can usually be done off-line, and the time to find the correct individual among a set of acquired video frames (descriptor matching), which is the real-time part of a re-identification system. This thesis addresses the processing time of descriptor matching, rather than improving ranking quality; reducing matching time is also relevant in practical applications involving interaction with human operators. It is shown how a trade-off between processing time and ranking quality can be achieved, for any given descriptor, through a multi-stage ranking approach inspired by multi-stage approaches to classification problems in pattern recognition, adapted here to re-identification as a ranking problem. Design criteria for such multi-stage re-identification systems are discussed, and the proposed approach is evaluated on three benchmark data sets using four state-of-the-art descriptors. In addition, with regard to processing time, typical dimensionality reduction methods are studied as a way to reduce the matching time of descriptors that generate high-dimensional feature spaces; empirical results are reported for three well-known feature reduction methods applied to two state-of-the-art descriptors on two benchmark data sets.
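
    As a hedged sketch of the multi-stage ranking idea summarized above, the snippet below shortlists gallery candidates with a cheap, dimensionality-reduced descriptor and re-ranks only the shortlist with the full descriptor. The descriptor dimensions, shortlist size, and the use of PCA with Euclidean matching are illustrative assumptions, not the thesis's actual descriptors or similarity measures.

    # Two-stage re-identification ranking: coarse shortlist, then fine re-ranking.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    gallery = rng.standard_normal((2000, 4096))   # descriptors of observed individuals
    probe = rng.standard_normal(4096)             # descriptor of the probe individual

    # Stage 1: fast matching in a reduced space to shortlist candidates.
    pca = PCA(n_components=64).fit(gallery)
    g_small = pca.transform(gallery)
    p_small = pca.transform(probe[None, :])[0]
    shortlist = np.argsort(np.linalg.norm(g_small - p_small, axis=1))[:100]

    # Stage 2: slower, more accurate re-ranking of the shortlist with full descriptors.
    order = shortlist[np.argsort(np.linalg.norm(gallery[shortlist] - probe, axis=1))]
    print("top-5 gallery indices:", order[:5])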

    Robust Network Topology Inference and Processing of Graph Signals

    The abundance of large and heterogeneous systems is rendering contemporary data more pervasive, intricate, and irregular in structure. With classical techniques struggling to handle the irregular (non-Euclidean) domain where the signals are defined, a popular approach at the heart of graph signal processing (GSP) is to: (i) represent the underlying support via a graph and (ii) exploit the topology of this graph to process the signals at hand. In addition to the irregular structure of the signals, another critical limitation is that the observed data is prone to the presence of perturbations, which, in the context of GSP, may affect not only the observed signals but also the topology of the supporting graph. Ignoring the presence of perturbations, along with the couplings between the errors in the signal and the errors in their support, can drastically hinder estimation performance. While many GSP works have looked at the presence of perturbations in the signals, much fewer have looked at the presence of perturbations in the graph, and almost none at their joint effect. While this is not surprising (GSP is a relatively new field), we expect this to change in the upcoming years. Motivated by the previous discussion, the goal of this thesis is to advance toward a robust GSP paradigm where the algorithms are carefully designed to incorporate the influence of perturbations in the graph signals, the graph support, and both. To do so, we consider different types of perturbations, evaluate their disruptive impact on fundamental GSP tasks, and design robust algorithms to address them. Comment: Dissertation
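
    As a small, hedged illustration of the kind of GSP task the thesis robustifies, the sketch below denoises a smooth graph signal with a Laplacian-regularized (Tikhonov) filter and compares the estimate obtained with the true topology against one obtained with a perturbed topology (a few flipped edges). The graph model, perturbation, and regularization weight are illustrative assumptions.

    # Graph-signal denoising with the true vs. a perturbed Laplacian (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 30
    A = (rng.random((n, n)) < 0.15).astype(float)
    A = np.triu(A, 1); A = A + A.T                    # undirected adjacency, no self-loops
    L = np.diag(A.sum(1)) - A                         # combinatorial graph Laplacian

    x = np.linalg.eigh(L)[1][:, 1]                    # a smooth (low-frequency) graph signal
    y = x + 0.3 * rng.standard_normal(n)              # noisy observation

    def denoise(Lap, obs, tau=2.0):
        # Tikhonov denoising: argmin_z ||z - obs||^2 + tau * z^T Lap z
        return np.linalg.solve(np.eye(n) + tau * Lap, obs)

    # Perturb the support: flip a few edges, as if the graph were imperfectly known.
    A_pert = A.copy()
    for i, j in [(2, 7), (4, 11), (9, 20)]:
        A_pert[i, j] = A_pert[j, i] = 1 - A_pert[i, j]
    L_pert = np.diag(A_pert.sum(1)) - A_pert

    err_true = np.linalg.norm(denoise(L, y) - x)
    err_pert = np.linalg.norm(denoise(L_pert, y) - x)
    print(f"error with true graph: {err_true:.3f}, with perturbed graph: {err_pert:.3f}")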

    Coded Slotted ALOHA: A Graph-Based Method for Uncoordinated Multiple Access

    In this paper, a random access scheme is introduced which relies on the combination of packet erasure correcting codes and successive interference cancellation (SIC). The scheme is named coded slotted ALOHA. A bipartite graph representation of the SIC process, resembling iterative decoding of generalized low-density parity-check codes over the erasure channel, is exploited to optimize the selection probabilities of the component erasure correcting codes via density evolution analysis. The capacity (in packets per slot) of the scheme is then analyzed in the context of the collision channel without feedback. Moreover, a capacity bound is developed and component code distributions tightly approaching the bound are derived. Comment: The final version to appear in IEEE Trans. Inf. Theory. 18 pages, 10 figures
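
    The following is a minimal sketch of the SIC process on the slot/user bipartite graph described above: each user transmits replicas of its packet in a few random slots (a simple repetition-based instance of coded slotted ALOHA), and the receiver iteratively decodes singleton slots and cancels the corresponding replicas, much like peeling decoding over the erasure channel. The frame size, number of users, and fixed repetition degree are illustrative assumptions, not the optimized distributions of the paper.

    # Iterative SIC ("peeling") over the user/slot bipartite graph (illustrative parameters).
    import random

    def simulate(n_users=80, n_slots=100, degree=3, seed=0):
        rng = random.Random(seed)
        # Bipartite graph: the slots in which each user's replicas are transmitted.
        slots_of = [rng.sample(range(n_slots), degree) for _ in range(n_users)]
        users_in = [set() for _ in range(n_slots)]
        for u, slots in enumerate(slots_of):
            for s in slots:
                users_in[s].add(u)

        decoded, progress = set(), True
        while progress:                        # iterate until no singleton slot remains
            progress = False
            for s in range(n_slots):
                if len(users_in[s]) == 1:      # singleton slot: its packet is decodable
                    (u,) = users_in[s]
                    decoded.add(u)
                    for s2 in slots_of[u]:     # cancel all replicas of the decoded user
                        users_in[s2].discard(u)
                    progress = True
        return len(decoded) / n_users

    print("fraction of packets resolved:", simulate())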

    CASE ID DETECTION IN UNLABELLED EVENT LOGS FOR PROCESS MINING

    In the realm of data science, event logs serve as valuable sources of information, capturing sequences of events or activities in various processes. However, when dealing with unlabelled event logs, the absence of a designated Case ID column poses a critical challenge, hindering the understanding of relationships and dependencies among events within a case or process. Motivated by the increasing adoption of data-driven decision-making and the need for efficient data analysis techniques, this master’s project presents the “Case ID Column Identification Library”. The library aims to streamline data preprocessing and enhance the efficiency of subsequent data analysis tasks by automatically identifying the Case ID column in unlabelled event logs. The project’s objective is to develop a versatile and user-friendly library that incorporates multiple methods, including a Convolutional Neural Network (CNN) and a parameterizable heuristic approach, to accurately identify the Case ID column. Users can choose individual methods or a combination of them based on their specific requirements, and can adjust the coefficients and settings of the heuristic formula to fine-tune the identification process. This report presents a comprehensive exploration of related work, methodology, data understanding, methods for Case ID column identification, software library development, and experimental results. The results demonstrate the effectiveness of the proposed methods and their implications for decision support systems.
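
    As a hedged sketch of a parameterizable heuristic for Case ID column identification, the snippet below scores each column of a toy event log with a few simple features (a balance between distinct and repeated values, and identifier-looking strings) and adjustable coefficients. The features, weights, regular expression, and sample log are illustrative assumptions, not the library’s actual formula.

    # Heuristic Case ID column scoring for an unlabelled event log (illustrative).
    import pandas as pd

    ID_PATTERN = r"^[A-Za-z_-]*\d+$"          # identifier-like values, e.g. "o17" or "case-42"

    def score_case_id(col: pd.Series, w_balance=1.0, w_idlike=0.5) -> float:
        n, n_unique = len(col), col.nunique()
        if pd.api.types.is_datetime64_any_dtype(col) or n_unique <= 1:
            return -1.0                       # timestamps and constant columns are not case IDs
        # Highest when values repeat but are not nearly constant:
        # several cases, each covering several events (peaks at 50% uniqueness).
        balance = 4 * (n_unique / n) * (1 - n_unique / n)
        # Share of values that look like short alphanumeric identifiers.
        id_like = col.astype(str).str.match(ID_PATTERN).mean()
        return w_balance * balance + w_idlike * id_like

    log = pd.DataFrame({
        "order":    ["o1", "o1", "o2", "o2", "o2", "o3"],
        "activity": ["create", "ship", "create", "pay", "ship", "create"],
        "amount":   [10.5, 10.5, 25.0, 99.9, 25.0, 7.5],
    })
    scores = {c: score_case_id(log[c]) for c in log.columns}
    print("best Case ID candidate:", max(scores, key=scores.get))   # -> "order"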