12,096 research outputs found

    Divide and Fuse: A Re-ranking Approach for Person Re-identification

    Full text link
    As re-ranking is a necessary procedure for boosting person re-identification (re-ID) performance on large-scale datasets, the diversity of features becomes crucial to person re-ID, both for designing pedestrian descriptions and for re-ranking based on feature fusion. However, in many circumstances, only one type of pedestrian feature is available. In this paper, we propose a "Divide and Fuse" re-ranking framework for person re-ID. It exploits the diversity within different parts of a high-dimensional feature vector for fusion-based re-ranking when no other features are accessible. Specifically, given an image, the extracted feature is divided into sub-features. Then the contextual information of each sub-feature is iteratively encoded into a new feature. Finally, the new features from the same image are fused into one vector for re-ranking. Experimental results on two person re-ID benchmarks demonstrate the effectiveness of the proposed framework. In particular, our method outperforms the state-of-the-art on the Market-1501 dataset. Comment: Accepted by BMVC 2017
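
    A minimal sketch of the divide-and-fuse idea on toy data, assuming plain L2 distances; the simple k-nearest-neighbor averaging below is a stand-in for the paper's iterative contextual encoding, not the authors' actual procedure:

        import numpy as np

        def divide_and_fuse(query, gallery, n_parts=4, k=5):
            """Toy divide-and-fuse re-ranking: split features into sub-features,
            encode each with contextual (k-nearest-neighbor) information, then
            fuse the encoded parts and rank the gallery by fused distance."""
            fused_q, fused_g = [], []
            for q_part, g_part in zip(np.array_split(query, n_parts),
                                      np.array_split(gallery, n_parts, axis=1)):
                # Contextual encoding (stand-in): average each sub-feature with
                # its k nearest neighbors in the same sub-space.
                d = np.linalg.norm(g_part - q_part, axis=1)
                knn = np.argsort(d)[:k]
                fused_q.append(np.vstack([q_part, g_part[knn]]).mean(axis=0))
                g_d = np.linalg.norm(g_part[:, None] - g_part[None, :], axis=2)
                g_knn = np.argsort(g_d, axis=1)[:, :k]
                fused_g.append(g_part[g_knn].mean(axis=1))
            fused_q, fused_g = np.concatenate(fused_q), np.hstack(fused_g)
            return np.argsort(np.linalg.norm(fused_g - fused_q, axis=1))

        gallery = np.random.rand(100, 128)   # 100 gallery images, 128-D features
        query = np.random.rand(128)
        print(divide_and_fuse(query, gallery)[:10])  # top-10 ranked gallery indices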

    Application of Fractal and Wavelets in Microcalcification Detection

    Get PDF
    Breast cancer has been recognized as one of the most frequent malignant tumors in women, and clustered microcalcifications in mammogram images have been widely recognized as an early sign of breast cancer. This work is devoted to reviewing the application of fractals and wavelets in microcalcification detection.
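
    One common wavelet-based scheme of the kind such reviews cover can be sketched as follows: decompose the image, suppress the low-frequency approximation band, and reconstruct so that small bright spots stand out. A minimal sketch using PyWavelets on a synthetic image (the wavelet choice and threshold are illustrative assumptions):

        import numpy as np
        import pywt

        # Synthetic "mammogram": smooth background plus a few bright specks
        # standing in for clustered microcalcifications.
        rng = np.random.default_rng(0)
        img = np.outer(np.hanning(256), np.hanning(256)) * 100
        for y, x in rng.integers(60, 200, size=(5, 2)):
            img[y, x] += 50  # point-like bright spots

        # Two-level wavelet decomposition; zeroing the approximation band acts
        # as a high-pass filter that preserves fine, spot-like detail.
        coeffs = pywt.wavedec2(img, "db4", level=2)
        coeffs[0] = np.zeros_like(coeffs[0])
        detail = pywt.waverec2(coeffs, "db4")

        # Simple thresholding of the detail image flags candidate pixels.
        candidates = np.argwhere(detail > detail.mean() + 4 * detail.std())
        print(f"{len(candidates)} candidate microcalcification pixels")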

    A graph theoretic approach to scene matching

    Get PDF
    The ability to match two scenes is a fundamental requirement in a variety of computer vision tasks. A graph theoretic approach to inexact scene matching is presented which is useful in dealing with problems due to imperfect image segmentation. A scene is described by a set of graphs, with nodes representing objects and arcs representing relationships between objects. Each node has a set of values representing the relations between pairs of objects, such as angle, adjacency, or distance. With this method of scene representation, the task in scene matching is to match two sets of graphs. Because of segmentation errors, variations in camera angle, illumination, and other conditions, an exact match between the sets of observed and stored graphs is usually not possible. In the developed approach, the problem is represented as an association graph, in which each node represents a possible mapping of an observed region to a stored object, and each arc represents the compatibility of two mappings. Nodes and arcs have weights indicating the merit of a region-object mapping and the degree of compatibility between two mappings. A match between the two graphs corresponds to a clique, or fully connected subgraph, in the association graph. The task is to find the clique that represents the best match. Fuzzy relaxation is used to update the node weights using the contextual information contained in the arcs and neighboring nodes. This simplifies the evaluation of cliques. A method of handling oversegmentation and undersegmentation problems is also presented. The approach is tested with a set of realistic images which exhibit many types of segmentation errors.
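
    A minimal sketch of the association-graph construction and a fuzzy-relaxation-style weight update as described above; the node merits, compatibility values, and update rule are hypothetical, and the clique search is brute-forced for clarity:

        import itertools
        import numpy as np

        # Association graph: each node is a (region, object) mapping with a
        # merit weight; each arc carries the compatibility of two mappings.
        nodes = [("r1", "door"), ("r1", "window"), ("r2", "window"), ("r3", "roof")]
        weights = np.array([0.9, 0.3, 0.8, 0.7])          # hypothetical merits
        compat = np.array([[0.0, 0.0, 0.8, 0.9],          # 0 = incompatible
                           [0.0, 0.0, 0.0, 0.4],
                           [0.8, 0.0, 0.0, 0.9],
                           [0.9, 0.4, 0.9, 0.0]])

        # Fuzzy relaxation (toy version): pull each node weight toward the
        # compatibility-weighted average of its neighbors' weights.
        for _ in range(10):
            support = compat @ weights / np.maximum(compat.sum(axis=1), 1e-9)
            weights = np.clip(0.5 * weights + 0.5 * support, 0.0, 1.0)

        # Best match = clique (mutually compatible subset) with the highest
        # total weight; brute force is fine at this toy scale.
        best = max((s for n in range(1, len(nodes) + 1)
                    for s in itertools.combinations(range(len(nodes)), n)
                    if all(compat[i, j] > 0 for i, j in itertools.combinations(s, 2))),
                   key=lambda s: weights[list(s)].sum())
        print([nodes[i] for i in best])  # e.g. r1->door, r2->window, r3->roof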

    Hi tech microeconomics and information non-intensive calculi

    Get PDF
    The article establishes a link between the contributions made to the study of hi-tech phenomena. It analyzes the evolution of studies of the knowledge economics (hi-tech) process carried out by different disciplines (hard and soft sciences: sociology, ecology, etc.) from the point of view of the objectives they pursue. Attention is concentrated on the analysis of applicable mathematical tools used to develop realistic formal models. Information intensity is defined as the amount of information needed for the realistic application of a corresponding formal tool. High information intensity is desirable because it improves model accuracy; low information intensity is preferred when high information intensity would require more information items than are available, which is usually the case in knowledge engineering. Fuzzy models seem to be a useful extension of the formal tools used in hi-tech microeconomics. However, even fuzzy sets can be prohibitively information intensive. Therefore, the range of available formal tools must be considerably broader. This paper introduces qualitative and semi-qualitative models and rough sets. Each formal tool is briefly characterized.
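
    Rough sets, named above as one of the less information-intensive tools, can be illustrated in a few lines: an indiscernibility relation over objects brackets a target set between its lower approximation (objects certainly in the set) and upper approximation (objects possibly in it). The firms, attribute, and target set below are hypothetical:

        # Minimal rough-set sketch: objects described by one coarse attribute
        # induce indiscernibility classes; a target set X is approximated
        # from below (certainly in X) and above (possibly in X).
        firms = {"a": "high", "b": "high", "c": "low", "d": "low", "e": "mid"}
        X = {"a", "c", "e"}  # hypothetical set of "innovative" firms

        classes = {}
        for obj, attr in firms.items():
            classes.setdefault(attr, set()).add(obj)

        lower = set.union(set(), *(c for c in classes.values() if c <= X))
        upper = set.union(set(), *(c for c in classes.values() if c & X))

        print("lower approximation:", lower)  # {'e'}
        print("upper approximation:", upper)  # {'a', 'b', 'c', 'd', 'e'}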

    Deep Epitome for Unravelling Generalized Hamming Network: A Fuzzy Logic Interpretation of Deep Learning

    Full text link
    This paper gives a rigorous analysis of trained Generalized Hamming Networks (GHN) proposed by Fan (2017) and discloses an interesting finding about GHNs: stacked convolution layers in a GHN are equivalent to a single yet wide convolution layer. On the theoretical side, the revealed equivalence can be regarded as a constructive manifestation of the universal approximation theorem (Cybenko, 1989; Hornik, 1991). In practice, it has profound and multi-fold implications. For network visualization, the constructed deep epitomes at each layer provide a visualization of the network's internal representation that does not rely on the input data. Moreover, deep epitomes allow the direct extraction of features in just one step, without resorting to the regularized optimizations used in existing visualization tools. Comment: 25 pages, 14 figures
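
    The linear-convolution backbone of this equivalence, that stacking bias-free, nonlinearity-free convolutions collapses into one convolution with the composed kernel, can be checked directly; a minimal 1-D NumPy sketch (the paper's result concerns GHN layers specifically, so this only illustrates the underlying associativity):

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal(64)      # input signal
        k1 = rng.standard_normal(3)      # first conv layer's kernel
        k2 = rng.standard_normal(5)      # second conv layer's kernel

        # Applying k1 then k2 ...
        stacked = np.convolve(np.convolve(x, k1, mode="full"), k2, mode="full")

        # ... equals one layer whose kernel is the composition of k1 and k2
        # (loosely, a "deep epitome" of the two layers).
        epitome = np.convolve(k1, k2, mode="full")
        single = np.convolve(x, epitome, mode="full")

        print(np.allclose(stacked, single))  # True: convolution is associative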

    An Overview of Data Mining Applications in Oil and Gas Exploration: Structural Geology and Reservoir Property-Issues

    Full text link
    Low oil prices have motivated energy executives to look more seriously into cost reduction in their supply chains. To this end, a new technology that is being experimentally considered in hydrocarbon exploration is data mining. There are two major categories of geoscientific problems to which data mining is applied: structural geology and reservoir property issues. This research overviews these categories by considering a variety of interesting works in each of them. The result is an understanding of the specific geoscientific problems studied in the literature, along with the relevant data mining methods. In this way, this work tries to lay the groundwork for a mutual understanding of oil and gas exploration between data miners and geoscientists. Comment: Part of DM4OG 2017 proceedings (arXiv:1705.03451)
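
    As a flavor of the reservoir-property category, a common setup is supervised regression from well-log attributes to a property such as porosity; a minimal scikit-learn sketch on synthetic data (the attributes, target relationship, and model choice are illustrative assumptions, not drawn from the surveyed papers):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        # Synthetic stand-ins for well-log attributes (e.g. gamma ray, sonic,
        # density) and a porosity target loosely correlated with them.
        rng = np.random.default_rng(0)
        logs = rng.standard_normal((500, 3))
        porosity = (0.2 - 0.05 * logs[:, 2] + 0.02 * logs[:, 0]
                    + 0.01 * rng.standard_normal(500))

        X_train, X_test, y_train, y_test = train_test_split(
            logs, porosity, random_state=0)
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)
        print(f"R^2 on held-out samples: {model.score(X_test, y_test):.2f}")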

    Big data analytics: Computational intelligence techniques and application areas

    Get PDF
    Big Data has a significant impact on developing functional smart cities and supporting modern societies. In this paper, we investigate the importance of Big Data in modern life and economy, and discuss challenges arising from Big Data utilization. Different computational intelligence techniques have been considered as tools for Big Data analytics. We also explore the powerful combination of Big Data and Computational Intelligence (CI) and identify a number of areas where novel applications in real-world smart city problems can be developed by utilizing these powerful tools and techniques. We present a case study for intelligent transportation in the context of a smart city, and a novel data modelling methodology based on a biologically inspired universal generative modelling approach called the Hierarchical Spatial-Temporal State Machine (HSTSM). We further discuss various implications of policy, protection, valuation and commercialization related to Big Data, its applications and deployment.
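
    HSTSM itself is not specified here in enough detail to implement, but the flavor of CI-based analytics for intelligent transportation can be sketched: clustering daily traffic-flow profiles to discover recurring patterns. The data shapes and parameters below are illustrative assumptions:

        import numpy as np
        from sklearn.cluster import KMeans

        # Synthetic daily traffic-flow profiles: 200 days x 24 hourly counts,
        # mixing a "weekday" shape (two rush-hour peaks) and a "weekend" shape.
        rng = np.random.default_rng(0)
        hours = np.arange(24)
        weekday = (80 * np.exp(-(hours - 8) ** 2 / 4)
                   + 90 * np.exp(-(hours - 17) ** 2 / 4))
        weekend = 50 * np.exp(-(hours - 13) ** 2 / 18)
        days = np.vstack([weekday + rng.normal(0, 5, 24) if i % 7 < 5
                          else weekend + rng.normal(0, 5, 24)
                          for i in range(200)])

        # Unsupervised pattern discovery: two clusters should recover the
        # weekday/weekend regimes.
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(days)
        print(np.bincount(labels))  # roughly a 5:2 split of days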

    Video Event Recognition for Surveillance Applications (VERSA)

    Full text link
    VERSA provides a general-purpose framework for defining and recognizing events in live or recorded surveillance video streams. The approach to event recognition in VERSA uses a declarative logic language to define the spatial and temporal relationships that characterize a given event or activity. Doing so requires the definition of certain fundamental spatial and temporal relationships and a high-level syntax for specifying frame templates and query parameters. Although the handling of uncertainty in the current VERSA implementation is simplistic, the language and architecture are amenable to extension using fuzzy logic or similar approaches. VERSA's high-level architecture is designed to work in XML-based, service-oriented environments. VERSA can be thought of as subscribing to the XML annotations streamed by a lower-level video analytics service that provides basic entity detection, labeling, and tracking. One or many VERSA Event Monitors could thus analyze video streams and provide alerts when certain events are detected. Comment: Master's Thesis, University of Nebraska at Omaha, 200
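
    The declarative style described here, spatial and temporal predicates composed into event definitions over streamed entity annotations, can be approximated in a few lines; the predicates and the "loitering" event below are hypothetical illustrations, not VERSA syntax:

        # Toy event recognition in the declarative spirit of VERSA: entity
        # tracks are streams of (frame, x, y); events are predicates over them.
        tracks = {
            "person1": [(f, 10 + 0.1 * f, 20) for f in range(100)],   # lingers
            "person2": [(f, 5 * f, 40) for f in range(100)],          # passes by
        }

        def inside(pos, region):
            x0, y0, x1, y1 = region
            return x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1

        def loitering(track, region, min_frames):
            """Temporal predicate: entity stays inside a region for at least
            min_frames consecutive frames."""
            run = best = 0
            for _, x, y in track:
                run = run + 1 if inside((x, y), region) else 0
                best = max(best, run)
            return best >= min_frames

        DOORWAY = (0, 0, 30, 30)  # hypothetical region of interest
        for name, track in tracks.items():
            if loitering(track, DOORWAY, min_frames=50):
                print(f"ALERT: {name} loitering near doorway")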