NEFI: Network Extraction From Images
Networks and network-like structures are amongst the central building blocks
of many technological and biological systems. Given a mathematical graph
representation of a network, methods from graph theory enable a precise
investigation of its properties. Software for the analysis of graphs is widely
available and has been applied to graphs describing large scale networks such
as social networks, protein-interaction networks, etc. In these applications,
graph acquisition, i.e., the extraction of a mathematical graph from a network,
is relatively simple. However, for many network-like structures, e.g. leaf
venations, slime molds and mud cracks, data collection relies on images where
graph extraction requires domain-specific solutions or even manual processing. Here we
introduce Network Extraction From Images, NEFI, a software tool that
automatically extracts accurate graphs from images of a wide range of networks
originating in various domains. While there is previous work on graph
extraction from images, theoretical results are fully accessible only to an
expert audience and ready-to-use implementations for non-experts are rarely
available or insufficiently documented. NEFI provides a novel platform allowing
practitioners from many disciplines to easily extract graph representations
from images by supplying flexible tools from image processing, computer vision
and graph theory bundled in a convenient package. Thus, NEFI constitutes a
scalable alternative to tedious and error-prone manual graph extraction and
special purpose tools. We anticipate NEFI to enable the collection of larger
datasets by reducing the time spent on graph extraction. The analysis of these
new datasets may open up the possibility to gain new insights into the
structure and function of various types of networks. NEFI is open source and
available at http://nefi.mpi-inf.mpg.de
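The core idea, turning a skeletonized network image into a mathematical graph, can be illustrated with a minimal sketch. This is only an illustration of the general principle (nodes at endpoints and junctions, edges along 8-connected skeleton pixels), not NEFI's actual pipeline; the function name and toy image are illustrative.

```python
# Minimal sketch: extract a graph from a binary skeleton image.
# Assumes the input is already skeletonized (1-pixel-wide foreground).
import numpy as np

def skeleton_to_graph(skel):
    """Return (nodes, edges): nodes are endpoint/junction pixels,
    edges connect 8-adjacent skeleton pixels."""
    pts = {tuple(p) for p in np.argwhere(skel)}
    nbrs = lambda p: [(p[0] + dy, p[1] + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0) and (p[0] + dy, p[1] + dx) in pts]
    # Pixels with one neighbour are endpoints; more than two, junctions.
    nodes = {p for p in pts if len(nbrs(p)) != 2}
    edges = {frozenset((p, q)) for p in pts for q in nbrs(p)}
    return nodes, edges

# A small Y-shaped skeleton: one junction, three endpoints.
img = np.zeros((5, 5), dtype=int)
for y, x in [(0, 2), (1, 2), (2, 2), (3, 1), (3, 3), (4, 0), (4, 4)]:
    img[y, x] = 1
nodes, edges = skeleton_to_graph(img)
```

A production tool like NEFI additionally contracts the degree-2 chain pixels into single weighted edges and applies image preprocessing before skeletonization.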
Characterisation of spatial network-like patterns from junctions' geometry
We propose a new method for quantitative characterization of spatial
network-like patterns with loops, such as surface fracture patterns, leaf vein
networks and patterns of urban streets. Such patterns are not well
characterized by purely topological estimators: even patterns that look
different and result from different morphogenetic processes can have similar
topology. A local geometric cue, the angles formed by the different branches at
junctions, can complement topological information and allows quantifying the
large-scale spatial coherence of the pattern. For patterns that grow over time,
such as fracture lines on the surface of ceramics, the rank assigned by our
method to each individual segment of the pattern approximates the order of
appearance of that segment. We apply the method to various network-like
patterns and find a continuous but sharp dichotomy between two classes of
spatial networks: hierarchical and homogeneous. The first class results from a
sequential growth process and presents large-scale organization, while the
latter presents local, but not global, organization.
Comment: version 2, 14 pages
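The local geometric cue described above, the angles between branches meeting at a junction, can be computed from branch direction vectors. A hedged sketch follows; the function name and toy junction are illustrative and the paper's actual ranking method is not reproduced here.

```python
# Sketch: angles between consecutive branches around a junction.
import numpy as np

def junction_angles(center, branch_points):
    """Angles (degrees) between consecutive branches at a junction,
    obtained by sorting branch directions around the center."""
    v = np.asarray(branch_points, dtype=float) - np.asarray(center, dtype=float)
    theta = np.sort(np.arctan2(v[:, 1], v[:, 0]))
    # Gaps between consecutive sorted directions, wrapping around 2*pi.
    gaps = np.diff(np.append(theta, theta[0] + 2 * np.pi))
    return np.degrees(gaps)

# A symmetric three-way junction: branches 120 degrees apart.
angles = junction_angles((0, 0),
                         [(1, 0), (-0.5, np.sqrt(3) / 2), (-0.5, -np.sqrt(3) / 2)])
```

Deviations of these gap angles from symmetry are the kind of local information that can distinguish, for example, sequentially grown fracture patterns from homogeneous ones.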
Parallelization of image processing algorithms based on chain and mid-crack codes
The Freeman chain code is a widely used description for a contour image. The mid-crack code was later proposed as a more precise method for image representation. We have developed a coding algorithm that generates either a chain code or a mid-crack code description by switching between two different tables. Since there is a strong incentive to use parallel processing in image-related problems, a parallel coding algorithm is implemented. This algorithm is developed on a pyramid architecture and an N-cube architecture. Using a linked-list data structure and neighbor identification, the algorithm gains efficiency because no sorting or neighborhood pairing is needed.
In this dissertation, the local symmetry deficiency (LSD) computation to calculate the local k-symmetry is embedded in the coding algorithm. Therefore, we can finish the code extraction and the LSD computation in one pass. The embedding process is not limited to the k-symmetry algorithm and has the capability of parallelism.
An adaptive quadtree to chain code conversion algorithm is also presented. This algorithm is designed for constructing the chain codes of the resulting quadtree from the boolean operation of two quadtrees by using the chain codes of the original one. The algorithm has the parallelism and is ready to be implemented on a pyramid architecture.
Our parallel processing approach can be viewed as a parallelization paradigm: a template to embed image processing algorithms in the chain coding process and to implement them in parallel.
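The Freeman chain code mentioned above encodes a contour as a sequence of directions between successive boundary pixels. A small sketch of the standard 8-direction scheme follows; it assumes an already-traced, ordered contour rather than performing the boundary tracing itself, and the helper name is illustrative.

```python
# Sketch: Freeman chain coding of an ordered 8-connected contour.
# Direction 0 = east, then counter-clockwise in 45-degree steps.
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def freeman_chain(contour):
    """Chain code for an ordered list of 8-connected (row, col) pixels."""
    return [DIRS[(q[0] - p[0], q[1] - p[1])] for p, q in zip(contour, contour[1:])]

# Walk east, then north-east, then north along a short contour.
code = freeman_chain([(2, 0), (2, 1), (1, 2), (0, 2)])
```

A mid-crack code works analogously but places the codes on the cracks between pixels, which is what allows switching between the two descriptions via a table lookup.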
Overcoming Overfitting Challenges with HOG Feature Extraction and XGBoost-Based Classification for Concrete Crack Monitoring
This study proposes a method that combines Histogram of Oriented Gradients (HOG) feature extraction and Extreme Gradient Boosting (XGBoost) classification to resolve the challenges of concrete crack monitoring. The purpose of the study is to address the common issue of overfitting in machine learning models. The research uses a dataset of 40,000 images of concrete cracks and HOG feature extraction to identify relevant patterns. Classification is performed using the ensemble method XGBoost, with a focus on optimizing its hyperparameters. This study evaluates the efficacy of XGBoost in comparison to other ensemble methods, such as Random Forest and AdaBoost. The results show that XGBoost outperforms the other algorithms in terms of accuracy, precision, recall, and F1-score. The proposed method obtains an accuracy of 96.95% with optimized hyperparameters, a recall of 96.10%, a precision of 97.90%, and an F1-score of 97%. Optimizing the number-of-trees hyperparameter shows that 1200 trees yield the greatest performance. The results demonstrate the efficacy of HOG-based feature extraction and XGBoost for accurate and dependable classification of concrete cracks, overcoming the overfitting issues that are typically encountered in such tasks.
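The HOG features underlying the method above are orientation histograms of image gradients accumulated over small cells. A minimal NumPy sketch of that computation follows; the cell size, bin count, and absence of block normalization are illustrative simplifications, not the study's actual parameters.

```python
# Sketch: unnormalized HOG-style features in plain NumPy.
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Orientation histograms (weighted by gradient magnitude)
    over non-overlapping cells, concatenated into one vector."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180     # unsigned gradients
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            a = ang[y:y + cell, x:x + cell].ravel()
            m = mag[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

# A horizontal ramp: all gradient energy falls in the 0-degree bin.
f = hog_features(np.tile(np.arange(16.0), (16, 1)))
```

The resulting fixed-length vectors are then fed to the classifier (XGBoost in the study), which is what makes gradient-boosted trees applicable to raw crack images.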
DeepSolarEye: Power Loss Prediction and Weakly Supervised Soiling Localization via Fully Convolutional Networks for Solar Panels
The impact of soiling on solar panels is an important and well-studied
problem in the renewable energy sector. In this paper, we present the first
convolutional neural network (CNN) based approach for solar panel soiling and
defect analysis. Our approach takes an RGB image of a solar panel and
environmental factors as inputs to predict power loss, soiling localization,
and soiling type. In computer vision, localization is a complex task which
typically requires manually labeled training data such as bounding boxes or
segmentation masks. Our proposed approach consists of four specialized stages
which completely avoid localization ground truth and only need panel images
with power loss labels for training. The regions of impact obtained from
the predicted localization masks are classified into soiling types using
webly supervised learning. To improve the localization capabilities of CNNs, we
introduce a novel bi-directional input-aware fusion (BiDIAF) block that
reinforces the input at different levels of CNN to learn input-specific feature
maps. Our empirical study shows that BiDIAF improves the power loss prediction
accuracy by about 3% and localization accuracy by about 4%. Our end-to-end
model yields further improvement of about 24% on localization when learned in a
weakly supervised manner. Our approach is generalizable and showed promising
results on web-crawled solar panel images. Our system runs at 22
fps (including all steps) on an NVIDIA TitanX GPU. Additionally, we collected
a first-of-its-kind dataset for solar panel image analysis consisting of 45,000+
images.
Comment: Accepted for publication at WACV 201
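The stated idea behind input-aware fusion, re-injecting the input at different levels of the network so later layers can learn input-specific feature maps, can be illustrated conceptually. The sketch below only shows the fixed re-injection step; the actual BiDIAF block is a learned, bi-directional CNN module, and the function name, shapes, and stride are hypothetical.

```python
# Conceptual sketch: re-inject a downsampled copy of the input image
# alongside an intermediate feature map (channel-first layout).
import numpy as np

def inject_input(features, image, stride):
    """Concatenate a strided copy of the input (C, H, W) onto a
    feature map (C', H // stride, W // stride) along channels."""
    small = image[:, ::stride, ::stride]
    return np.concatenate([features, small], axis=0)

feat = np.zeros((32, 8, 8))   # e.g. a feature map at 1/4 resolution
img = np.ones((3, 32, 32))    # RGB input
fused = inject_input(feat, img, stride=4)
```

In a real network the concatenated tensor would pass through further learned convolutions, which is where the "input-aware" feature maps come from.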
Semantic segmentation and photogrammetry of crowdsourced images to monitor historic facades
Crowdsourced images hold information that could potentially be used to remotely monitor heritage sites and reduce the human and capital resources devoted to on-site inspections. This article proposes a combination of semantic image segmentation and photogrammetry to monitor changes in built heritage sites. In particular, this article focuses on segmenting potentially damaging plants from the surrounding stone masonry and other image elements. The method compares different backend models and two model architectures: (i) a one-stage model that segments seven classes within the image, and (ii) a two-stage model that uses the results from the first stage to refine a binary segmentation for the plant class. The final selected model achieves an overall IoU of 66.9% for seven classes (54.6% for the one-stage plant class, 56.2% for the two-stage plant class). Further, the segmentation output is combined with photogrammetry to build a 3D segmented model to measure the area of biological growth. Lastly, the main findings from this paper are: (i) with the help of transfer learning and a proper choice of model architecture, image segmentation can be easily applied to analyze crowdsourced data; (ii) photogrammetry can be combined with image segmentation to alleviate image distortions for monitoring purposes; (iii) beyond the measurement of plant area, this method has the potential to be easily transferred to other tasks, such as monitoring cracks and erosion, or as a masking tool in the photogrammetry workflow.
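The IoU figures quoted above are computed per class as the intersection over union of predicted and ground-truth masks. A small sketch follows; the function name, class ids, and toy masks are illustrative, not the paper's data.

```python
# Sketch: per-class intersection-over-union for segmentation masks.
import numpy as np

def class_iou(pred, gt, cls):
    """IoU of one class between a predicted and a ground-truth label map."""
    p, g = (pred == cls), (gt == cls)
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / union if union else float("nan")

pred = np.array([[1, 1], [0, 2]])
gt   = np.array([[1, 0], [0, 2]])
iou_plant = class_iou(pred, gt, 1)   # 1 overlapping pixel / 2 in union
```

Averaging this quantity over all classes gives the overall IoU used to compare the one-stage and two-stage architectures.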
Quantification of damage evolution in masonry walls subjected to induced seismicity
This paper aims to quantify the evolution of damage in masonry walls under induced seismicity. A damage index equation, which is a function of the evolution of shear slippage and opening of the mortar joints, as well as of the drift ratio of masonry walls, is proposed herein. Initially, a dataset of experimental tests from in-plane quasi-static and cyclic tests on masonry walls was considered. The experimentally obtained crack patterns were investigated and their correlation with damage propagation was studied. Using software based on the Distinct Element Method, a numerical model was developed and validated against full-scale experimental tests obtained from the literature. Wall panels representing common typologies of house façades of unreinforced masonry buildings in Northern Europe, i.e. near the Groningen gas field in the Netherlands, were numerically investigated. The accumulation of damage during the seismic response of the masonry walls was investigated by means of representative harmonic load excitations and an incremental dynamic analysis based on induced seismicity records from the Groningen region. The ability of this index to capture different damage situations is demonstrated. The proposed methodology could also be applied to quantify damage accumulation in masonry during strong earthquakes and aftershocks.
OctNetFusion: Learning Depth Fusion from Data
In this paper, we present a learning based approach to depth fusion, i.e.,
dense 3D reconstruction from multiple depth images. The most common approach to
depth fusion is based on averaging truncated signed distance functions, which
was originally proposed by Curless and Levoy in 1996. While this method is
simple and provides great results, it is not able to reconstruct (partially)
occluded surfaces and requires a large number of frames to filter out sensor noise
and outliers. Motivated by the availability of large 3D model repositories and
recent advances in deep learning, we present a novel 3D CNN architecture that
learns to predict an implicit surface representation from the input depth maps.
Our learning based method significantly outperforms the traditional volumetric
fusion approach in terms of noise reduction and outlier suppression. By
learning the structure of real world 3D objects and scenes, our approach is
further able to reconstruct occluded regions and to fill in gaps in the
reconstruction. We demonstrate that our learning based approach outperforms
both vanilla TSDF fusion as well as TV-L1 fusion on the task of volumetric
fusion. Further, we demonstrate state-of-the-art 3D shape completion results.
Comment: 3DV 2017, https://github.com/griegler/octnetfusio
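The baseline this paper compares against, averaging truncated signed distance functions (Curless and Levoy, 1996), reduces to a running weighted average per voxel. A minimal sketch of that update follows; the function name and flat 4-voxel grid are illustrative.

```python
# Sketch: TSDF fusion as a per-voxel running weighted average.
import numpy as np

def fuse_tsdf(tsdf, weight, new_tsdf, new_weight=1.0):
    """Fold one new TSDF observation into the running average."""
    total = weight + new_weight
    fused = (tsdf * weight + new_tsdf * new_weight) / np.maximum(total, 1e-9)
    return fused, total

grid = np.zeros(4)
w = np.zeros(4)
grid, w = fuse_tsdf(grid, w, np.array([0.2, -0.1, 1.0, 1.0]))
grid, w = fuse_tsdf(grid, w, np.array([0.4, -0.3, 1.0, 1.0]))
# Two equally weighted observations average element-wise.
```

The learned approach replaces exactly this fixed averaging rule with a 3D CNN, which is why it can fill in occluded regions the average cannot recover.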