
    Development Of A Neural Network Embedding For Quantifying Crack Pattern Similarity In Masonry Structures

    The degree of similarity between damage patterns often correlates with the likelihood of similar damage causes. Deciding whether crack patterns are similar is therefore a key step in assessing the condition of masonry structures. To our knowledge, no literature has been published on masonry crack pattern similarity measures that correlate well with assessments by structural engineers. Consequently, similarity assessments are currently performed solely by experts and require considerable time and effort; they are also expensive, limited by the availability of experts, and yield only qualitative answers. In this work, we propose an automated approach that has the potential to overcome these shortcomings and perform comparably with experts. At its core is a deep neural network embedding that can be used to calculate a numerical distance between crack patterns on comparable façades. The embedding is obtained by fitting a deep neural network to a classification task, i.e., predicting the crack pattern archetype label from a crack pattern image. The network is fitted to synthetic crack patterns simulated using a statistics-based approach proposed in this work. The simulation process can account for important crack pattern characteristics such as crack location, orientation, and length. The embedding transforms a crack pattern (raster image) into a 64-dimensional real-valued vector space, where the closeness between two vectors is calculated as the cosine of their angle. The proposed approach is tested on 2D façades with and without openings, and with synthetic crack patterns consisting of a single crack and of multiple cracks.
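
    As a minimal sketch of the similarity measure described in the abstract: two 64-dimensional embedding vectors are compared by the cosine of their angle. The dimensionality and the cosine measure come from the abstract; the `embed`-style vectors below are random placeholders standing in for the outputs of the trained network, which is not available here.

    ```python
    import numpy as np

    def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
        """Cosine of the angle between two embedding vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hypothetical 64-dimensional embeddings of two crack pattern images,
    # e.g. taken from an intermediate layer of a trained classification CNN.
    rng = np.random.default_rng(0)
    emb_a = rng.normal(size=64)
    emb_b = rng.normal(size=64)

    similarity = cosine_similarity(emb_a, emb_b)  # 1.0 = same direction, -1.0 = opposite
    print(f"crack pattern similarity: {similarity:.3f}")
    ```

    Values close to 1 would indicate crack patterns that the embedding considers highly similar, while values near 0 or below indicate dissimilar patterns.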

    Flexible image analysis for law enforcement agencies with deep neural networks to determine: where, who and what

    Due to the increasing need for effective security measures and the integration of cameras in commercial products, a huge amount of visual data is created today. Law enforcement agencies (LEAs) are inspecting images and videos to find radicalization, propaganda for terrorist organizations and illegal products on darknet markets. This is time consuming. Instead of an undirected search, LEAs would like to adapt to new crimes and threats, and focus only on data from specific locations, persons or objects, which requires flexible interpretation of image content. Visual concept detection with deep convolutional neural networks (CNNs) is a crucial component to understand the image content. This paper has five contributions. The first contribution allows image-based geo-localization to estimate the origin of an image. CNNs and geotagged images are used to create a model that determines the location of an image by its pixel values. The second contribution enables analysis of fine-grained concepts to distinguish sub-categories in a generic concept. The proposed method encompasses data acquisition and cleaning and concept hierarchies. The third contribution is the recognition of person attributes (e.g., glasses or moustache) to enable query by textual description for a person. The person-attribute problem is treated as a specific sub-task of concept classification. The fourth contribution is an intuitive image annotation tool based on active learning. Active learning allows users to define novel concepts flexibly and train CNNs with minimal annotation effort. The fifth contribution increases the flexibility for LEAs in the query definition by using query expansion. Query expansion maps user queries to known and detectable concepts. Therefore, no prior knowledge of the detectable concepts is required for the users. The methods are validated on data with varying locations (popular and non-touristic locations), varying person attributes (CelebA dataset), and varying numbers of annotations.
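
    To illustrate the idea behind the fifth contribution, the sketch below maps a free-text user query onto the concepts a detector bank can actually recognize. The concept names, keyword lists, and the simple token-overlap scoring are illustrative assumptions only; the paper's actual query-expansion method is not reproduced here.

    ```python
    from typing import Dict, List, Tuple

    # Hypothetical mapping from detectable concepts to descriptive keywords.
    CONCEPT_KEYWORDS: Dict[str, List[str]] = {
        "person_glasses": ["glasses", "spectacles", "eyewear"],
        "person_moustache": ["moustache", "mustache", "facial", "hair"],
        "location_beach": ["beach", "coast", "seaside", "sand"],
    }

    def expand_query(query: str, top_k: int = 2) -> List[Tuple[str, int]]:
        """Rank detectable concepts by keyword overlap with the user query."""
        tokens = set(query.lower().split())
        scores = [
            (concept, len(tokens & set(keywords)))
            for concept, keywords in CONCEPT_KEYWORDS.items()
        ]
        ranked = sorted((s for s in scores if s[1] > 0), key=lambda s: -s[1])
        return ranked[:top_k]

    print(expand_query("man with a moustache on the beach"))
    # -> [('person_moustache', 1), ('location_beach', 1)]
    ```

    The point of such a mapping is that the user never needs to know which concepts the underlying CNNs were trained on; the expansion layer translates the query into detectable terms.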