
    Hierarchical improvement of foreground segmentation masks in background subtraction

    A plethora of algorithms have been defined for foreground segmentation, a fundamental stage for many computer vision applications. In this work, we propose a post-processing framework to improve the foreground segmentation performance of background subtraction algorithms. We define a hierarchical framework for extending segmented foreground pixels to undetected foreground object areas and for removing erroneously segmented foreground. Firstly, we create a motion-aware hierarchical image segmentation of each frame that prevents merging foreground and background image regions. Then, we estimate the quality of the foreground mask through the fitness of the binary regions in the mask and the hierarchy of segmented regions. Finally, the improved foreground mask is obtained as an optimal labeling by jointly exploiting foreground quality and spatial color relations in a pixel-wise fully-connected Conditional Random Field. Experiments are conducted over four large and heterogeneous datasets with varied challenges (CDNET2014, LASIESTA, SABS and BMC), demonstrating the capability of the proposed framework to improve background subtraction results. This work was partially supported by the Spanish Government (HAVideo, TEC2014-53176-R).
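
    As a rough illustration of the final refinement step described in this abstract, the sketch below cleans up a binary foreground mask with a pixel-wise fully-connected CRF using the third-party pydensecrf package. This is a minimal stand-in, not the authors' code: the gt_prob confidence and the kernel parameters are illustrative assumptions, and the full method additionally uses the motion-aware segmentation hierarchy and the mask-quality estimate.

        # Minimal sketch: refine a binary foreground mask with a fully-connected CRF.
        # Requires: pip install pydensecrf opencv-python numpy
        import numpy as np
        import cv2
        import pydensecrf.densecrf as dcrf
        from pydensecrf.utils import unary_from_labels

        def refine_mask(frame_bgr, fg_mask):
            """frame_bgr: HxWx3 uint8 frame; fg_mask: HxW mask from any background subtractor."""
            h, w = fg_mask.shape
            labels = (fg_mask > 0).astype(np.int32)      # 0 = background, 1 = foreground

            # Unary term from the (possibly noisy) mask; gt_prob says how much we trust it.
            unary = unary_from_labels(labels, 2, gt_prob=0.7, zero_unsure=False)

            crf = dcrf.DenseCRF2D(w, h, 2)
            crf.setUnaryEnergy(unary)
            # Spatial smoothness and colour (appearance) kernels; parameters are illustrative.
            crf.addPairwiseGaussian(sxy=3, compat=3)
            crf.addPairwiseBilateral(sxy=60, srgb=10,
                                     rgbim=np.ascontiguousarray(frame_bgr), compat=5)

            q = crf.inference(5)                          # 5 mean-field iterations
            return np.argmax(q, axis=0).reshape(h, w).astype(np.uint8) * 255

        # Example usage with a stock OpenCV background subtractor:
        # bgs = cv2.createBackgroundSubtractorMOG2()
        # mask = bgs.apply(frame); refined = refine_mask(frame, mask)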

    Logic learning and optimized drawing: two hard combinatorial problems

    Nowadays, information extraction from large datasets is a recurring operation in countless fields of application. The purpose guiding this thesis is, ideally, to follow the data flow along its journey, describing some hard combinatorial problems that arise from two key processes, one consecutive to the other: information extraction and representation. The approaches considered here focus mainly on metaheuristic algorithms, to address the need for fast and effective optimization methods. The problems studied include data extraction instances, such as Supervised Learning in Logic Domains and the Max Cut-Clique Problem, as well as two different Graph Drawing Problems. Moreover, stemming from these main topics, other themes are also discussed, namely two different approaches to handling Information Variability in Combinatorial Optimization Problems (COPs), and Topology Optimization of lightweight concrete structures.
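
    To give a flavour of the metaheuristic approach mentioned above, here is a toy simulated-annealing local search for the classic Max-Cut problem, a simpler, related cut problem. It is a generic illustration with made-up parameters, not the thesis's algorithm for the Max Cut-Clique Problem.

        # Toy simulated annealing for Max-Cut, illustrating the metaheuristic flavour
        # of the combinatorial problems discussed above (not the thesis's algorithm).
        import math
        import random

        def max_cut_sa(edges, n, iters=20000, t0=2.0, cooling=0.9995, seed=0):
            """edges: list of (u, v, weight); n: number of vertices. Returns (cut_value, side)."""
            rng = random.Random(seed)
            side = [rng.randint(0, 1) for _ in range(n)]      # random initial partition
            adj = [[] for _ in range(n)]
            for u, v, w in edges:
                adj[u].append((v, w))
                adj[v].append((u, w))

            cur = sum(w for u, v, w in edges if side[u] != side[v])
            best, best_side = cur, side[:]
            t = t0
            for _ in range(iters):
                v = rng.randrange(n)
                # Gain of flipping vertex v: edges entering the cut minus edges leaving it.
                gain = sum(w if side[v] == side[u] else -w for u, w in adj[v])
                if gain > 0 or rng.random() < math.exp(gain / t):
                    side[v] ^= 1
                    cur += gain
                    if cur > best:
                        best, best_side = cur, side[:]
                t *= cooling
            return best, best_side

        # Example: a 4-cycle has an optimal cut of weight 4.
        print(max_cut_sa([(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)], 4))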

    A Multi-scale colour and Keypoint Density-based Approach for Visual Saliency Detection.

    In the first seconds of observation of an image, several visual attention processes are involved in the identification of the visual targets that pop out from the scene to our eyes. Saliency is the quality that makes certain regions of an image stand out from the visual field and grab our attention. Saliency detection models, inspired by visual cortex mechanisms, employ both colour and luminance features. Furthermore, both the locations of pixels and the presence of objects influence the visual attention processes. In this paper, we propose a new saliency method based on the combination of the distribution of interest points in the image with multiscale analysis, a centre-bias module and a machine learning approach. We use perceptually uniform colour spaces to study how colour impacts the extraction of saliency. To investigate eye movements and assess the performance of saliency methods over object-based images, we conduct experimental sessions on our dataset ETTO (Eye Tracking Through Objects). Experiments show our approach to be accurate in the detection of saliency with respect to state-of-the-art methods and accessible eye-movement datasets. The performance over object-based images is excellent and remains consistent on generic pictures. Besides, our work reveals interesting findings on some relationships between saliency and perceptually uniform colour spaces.
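
    The sketch below illustrates two ingredients named in this abstract, the interest-point distribution and the centre-bias module, by turning ORB keypoints into a smoothed density map and blending it with a centred Gaussian prior. It is a simplified stand-in, not the authors' model: the sigma, centre_weight and keypoint-count values are illustrative assumptions, and the full method also uses multi-scale colour features in perceptually uniform colour spaces and a learned combination.

        # Minimal keypoint-density saliency map with a centre-bias prior.
        import cv2
        import numpy as np

        def keypoint_density_saliency(img_bgr, sigma=25, centre_weight=0.5):
            h, w = img_bgr.shape[:2]
            gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)

            # Interest-point distribution: accumulate ORB keypoints, smooth into a density map.
            kps = cv2.ORB_create(nfeatures=2000).detect(gray, None)
            density = np.zeros((h, w), np.float32)
            for kp in kps:
                x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
                density[min(y, h - 1), min(x, w - 1)] += 1.0
            density = cv2.GaussianBlur(density, (0, 0), sigma)

            # Centre-bias prior: a Gaussian centred on the image.
            ys, xs = np.mgrid[0:h, 0:w]
            centre = np.exp(-(((xs - w / 2) ** 2) / (2 * (w / 4) ** 2)
                              + ((ys - h / 2) ** 2) / (2 * (h / 4) ** 2)))

            sal = (1 - centre_weight) * density / (density.max() + 1e-8) + centre_weight * centre
            return (255 * sal / sal.max()).astype(np.uint8)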

    Computing and Information Science

    Cornell University Courses of Study, Vol. 98, 2006/2007.

    Computation in Complex Networks

    Complex networks are one of the most challenging research focuses across disciplines, including physics, mathematics, biology, medicine, engineering, and computer science, among others. Interest in complex networks is growing steadily, due to their ability to model many everyday systems, such as technology networks, the Internet, and communication, chemical, neural, social, political and financial networks. The Special Issue “Computation in Complex Networks” of Entropy offers a multidisciplinary view of how some complex systems behave, providing a collection of original and high-quality papers within the following research fields:
    • Community detection
    • Complex network modelling
    • Complex network analysis
    • Node classification
    • Information spreading and control
    • Network robustness
    • Social networks
    • Network medicine

    Computer Aided Verification

    This open access two-volume set, LNCS 13371 and 13372, constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    From image co-segmentation to discrete optimization in computer vision - the exploration on graphical model, statistical physics, energy minimization, and integer programming

    This dissertation aims to explore ideas and frameworks for solving discrete optimization problems in computer vision. Much of the work is inspired by the study of the image co-segmentation problem. It is through research on this topic that the author has become very familiar with the graphical model and energy minimization point of view on computer vision problems, that is, how to combine local information with neighborhood interaction information in a graphical system for inference; the author has also come to the realization that many problems in and beyond computer vision can be solved in this way.

    At the beginning of this dissertation, we first give a comprehensive background review of graphical models, energy minimization, and integer programming, as well as their connections with fundamental statistical physics. We aim to review the various aspects of the concepts, models, and algorithms in a systematic way and from a different perspective. For instance, we review the correspondences between the commonly used unary/binary energy terms in computer vision and those of the fundamental Ising model in statistical physics; we also summarize several widely used discrete energy minimization algorithms in computer vision under a unified framework from statistical physics; in addition, we stress the close connections between graphical model energy minimization and integer programming problems, and in particular we point out the central role of Mixed-Integer Quadratic Programming in discrete optimization in and beyond computer vision.

    Moreover, we explore the relationship between integer programming and energy minimization experimentally. We test integer programming methods on randomly generated energy formulations (as they would appear in computer vision problems), and similarly test energy minimization methods on the integer programming problem of graph K-coloring. This lets us compare the optimization performance of the various methods (whether designed for energy minimization or integer programming) on one platform. We come to the conclusion that sharing methods across the fields (energy minimization in computer vision and integer programming in applied mathematics) is very helpful and beneficial.

    Based on the statistical-physics-inspired energy minimization framework we obtained, we formulate the task of density-based clustering in this form. Energy is defined in terms of inhomogeneity in local point density. A sequence of energy minima is found to recursively partition the points, and thus we find a hierarchical embedding of clusters that are increasingly homogeneous in density. Energy is expressed as the sum of a unary (data) term and a binary (smoothness) term. The only parameter required to be specified by the user is a homogeneity criterion: the degree of acceptable fluctuation in density within a cluster. Thus, we do not have to specify, for example, the number of clusters present. Disjoint clusters with the same density are identified separately. Experimental results show that our method is able to handle clusters of different shapes, sizes and densities. We present the performance of our approach using the energy optimization algorithms ICM, LBP, graph cuts, and the mean-field theory algorithm.
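
    As a concrete illustration of the unary-plus-pairwise energy form and of ICM, one of the solvers listed above, here is a generic sketch that minimizes a data term plus a Potts smoothness term on an arbitrary graph. It is not the dissertation's code; the Potts weight lam and the sweep count are illustrative assumptions.

        # Minimal sketch of Iterated Conditional Modes (ICM) for an energy of the form
        #   E(x) = sum_i unary[i][x_i] + sum_{(i,j) in edges} lam * [x_i != x_j]
        # i.e. a unary (data) term plus a binary Potts (smoothness) term.
        import numpy as np

        def icm(unary, edges, lam=1.0, max_sweeps=20):
            """unary: (n_nodes, n_labels) costs; edges: list of (i, j) pairs; lam: Potts weight."""
            n, k = unary.shape
            labels = np.argmin(unary, axis=1)            # start from the unary-optimal labelling
            nbrs = [[] for _ in range(n)]
            for i, j in edges:
                nbrs[i].append(j)
                nbrs[j].append(i)

            for _ in range(max_sweeps):
                changed = False
                for i in range(n):
                    # Local conditional energy of each candidate label at node i.
                    cost = unary[i].astype(float)
                    for j in nbrs[i]:
                        cost += lam * (np.arange(k) != labels[j])
                    best = int(np.argmin(cost))
                    if best != labels[i]:
                        labels[i] = best
                        changed = True
                if not changed:                          # local minimum reached
                    break
            return labels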
    We also show that the family of commonly used spectral graph clustering algorithms (such as Normalized Cut) is a special case of our formulation, using only the binary energy term while ignoring the unary term.

    After the discussion above of the general framework for discrete optimization in computer vision, the dissertation then focuses on the study of image co-segmentation, which was in fact carried out before the above topics. Image co-segmentation is the task of automatically discovering, locating and segmenting an unknown common object in a set of images. It has become a popular research topic in computer vision in recent years. The unsupervised nature is an important characteristic of the problem; i.e., the common object is a priori unknown. Moreover, the common object may be subject to viewpoint change, lighting change, occlusion, and deformation across the images; all these conditions make the co-segmentation task very challenging. In this part of the study we focus on image co-segmentation and propose several approaches for addressing this problem. Most existing co-segmentation methods focus on co-segmenting images with a very dominant common object, where background interference is very limited. Such images are not realistic for the co-segmentation task, since in practice we often encounter images with very rich and complex content where the common object is not dominant and appears alongside a large number of other objects. In this work we aim to address the image co-segmentation problem on this kind of image, which cannot be handled properly by many previous methods.

    Two distinct approaches are proposed in this work for image co-segmentation; the key difference lies in the method of common object discovery. The first is a "topology"-based approach (also called a "point-region" approach), while the second is a "sparse optimization"-based approach. Specifically, in the first approach we combine image keypoint features with segment features to discover the common object, relying on the local topology consistency of both the keypoint and segment layouts for robust recognition. The initial foreground (the common object) obtained in each image is then refined through graphical model energy minimization based on a global appearance model extracted from the entire image dataset. The second approach is inspired by sparse optimization techniques; here we use a sparse approximation scheme to find the optimal correspondence of the segments in two images as the initial estimate of the common object, based on linear additive features extracted from the segments. In both approaches, we emphasize the exploitation of inter-image information in all steps of the algorithms; therefore, the common object need not be dominant or salient in each individual image, as long as it is "common" across the image set.

    Extensive experiments have been conducted to validate the performance of the proposed approaches. We carry out experiments on widely used benchmark datasets for image co-segmentation, including the iCoseg dataset, the multi-view co-segmentation dataset, the Oxford flower dataset, and so forth. Besides these datasets, in order to better evaluate performance on rich and complex images with a non-dominant common object, we also propose a new dataset in this work called richCoseg. Experiments are also conducted on this new dataset, and qualitative and quantitative comparisons with recent methods are provided. Finally, this dissertation also briefly discusses some other vision problems the author has studied in previously published works.
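
    The toy example below illustrates the sparse-approximation idea behind the second approach: one segment's feature vector from image A is represented as a sparse combination of the segment features of image B, so the few non-zero coefficients suggest candidate correspondences. The Lasso formulation, the alpha value and the synthetic features are illustrative assumptions, not the dissertation's actual scheme.

        # Toy sparse segment correspondence via L1-regularized regression.
        import numpy as np
        from sklearn.linear_model import Lasso

        def sparse_segment_match(feat_a, feats_b, alpha=0.05):
            """feat_a: (d,) feature of one segment in image A; feats_b: (m, d) features of
            m segments in image B. Returns indices of B's segments with non-zero weight."""
            model = Lasso(alpha=alpha, positive=True, max_iter=10000)
            model.fit(feats_b.T, feat_a)          # dictionary columns are B's segments
            coef = model.coef_
            return np.nonzero(coef > 1e-6)[0], coef

        # Example with purely synthetic features:
        rng = np.random.default_rng(0)
        feats_b = rng.random((8, 32))
        feat_a = 0.7 * feats_b[2] + 0.3 * feats_b[5]      # a mix of two B-segments
        idx, coef = sparse_segment_match(feat_a, feats_b)
        print(idx)                                        # likely selects segments 2 and 5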

    A Machine Learning Enhanced Scheme for Intelligent Network Management

    Versatile networking services have a huge influence on daily life, while the amount and diversity of services make network systems highly complex. Network scale and complexity grow with increasing infrastructure, networking functions, network slices, and the evolution of the underlying architecture. The conventional approach to maintaining such a large and complex platform is manual administration, which makes effective and insightful management troublesome. A feasible and promising alternative is to extract insightful information from the large volume of network data produced. The goal of this thesis is to use learning-based algorithms from the machine learning community to discover valuable knowledge in substantial network data, which directly promotes intelligent management and maintenance. In this thesis, management and maintenance focus on two schemes: network anomaly detection and root cause localization, and critical traffic resource control and optimization.

    Firstly, the abundant network data carry informative messages, but their heterogeneity and complexity make diagnosis challenging. For unstructured logs, abstract, formatted log templates are extracted to regularize log records. An in-depth analysis framework based on heterogeneous data is proposed in order to detect the occurrence of faults and anomalies. It employs representation learning methods to map unstructured data into numerical features and fuses the extracted features for network anomaly and fault detection. The representation learning makes use of word2vec-based embedding technologies for semantic expression. Next, fault and anomaly detection only reveals the occurrence of events without identifying their root causes, so fault localization opens a gate to narrowing down the source of systemic anomalies. The extracted features are turned into an anomaly degree and coupled with an importance ranking method to highlight the locations of anomalies in network systems. Two ranking modes, instantiated with PageRank and operation errors, jointly highlight the locations of latent issues.

    Besides fault and anomaly detection, network traffic engineering deals with network communication and computation resources to optimize the efficiency of data traffic transfer. Especially when network traffic is constrained by communication conditions, a proactive path planning scheme is helpful for efficient traffic control. A learning-based traffic planning algorithm is therefore proposed, based on a sequence-to-sequence model, to discover reasonable hidden paths from abundant traffic history data over a Software Defined Network architecture. Finally, traffic engineering based solely on empirical data is likely to result in stale and sub-optimal solutions, or even worse situations. A resilient mechanism is required to adapt network flows to a dynamic environment based on context. Thus, a reinforcement learning-based scheme is put forward for dynamic data forwarding that takes network resource status into account, which shows a promising performance improvement.

    In the end, the proposed anomaly processing framework strengthens analysis and diagnosis for network system administrators through combined fault detection and root cause localization, while the learning-based traffic engineering improves network flow management using historical data and further points to a promising direction for flexible traffic adjustment in ever-changing environments.
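
    The sketch below illustrates the representation-learning step described in this abstract: parsed log templates are embedded with word2vec and averaged into fixed-length numeric features that a downstream fault or anomaly detector could consume. The template strings and the model parameters are made-up placeholders, not the thesis's actual pipeline.

        # Rough sketch: word2vec embedding of parsed log templates (gensim).
        import numpy as np
        from gensim.models import Word2Vec

        # Assume a log parser has already reduced raw logs to tokenised templates.
        templates = [
            "interface <*> changed state to down".split(),
            "interface <*> changed state to up".split(),
            "bgp neighbor <*> session established".split(),
            "bgp neighbor <*> session lost".split(),
        ]

        # Train a small skip-gram model on the token sequences.
        w2v = Word2Vec(sentences=templates, vector_size=32, window=3,
                       min_count=1, sg=1, epochs=50)

        def embed(tokens):
            """Average word vectors of a template's tokens into one fixed-length feature."""
            vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
            return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.wv.vector_size)

        features = np.stack([embed(t) for t in templates])
        print(features.shape)    # (4, 32) -> ready for an anomaly/fault classifier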

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.