1-D Convolutional Graph Convolutional Networks for Fault Detection in Distributed Energy Systems
This paper presents a 1-D convolutional graph neural network for fault
detection in microgrids. The combination of 1-D convolutional neural networks
(1D-CNN) and graph convolutional networks (GCN) helps extract both
spatial-temporal correlations from the voltage measurements in microgrids. The
fault detection scheme includes fault event detection, fault type and phase
classification, and fault location. Five neural network models are trained to
handle these tasks. Transfer learning and fine-tuning are applied to reduce
training effort. The combined 1-D convolutional graph convolutional network
(1D-CGCN) is compared with a traditional ANN structure on the Potsdam 13-bus
microgrid dataset. It achieves accuracies of 99.27%, 98.1%, 98.75%, and 95.6%
for fault detection, fault type classification, fault phase identification,
and fault location, respectively.
Comment: arXiv admin note: text overlap with arXiv:2210.1517
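The core idea, pairing a temporal 1-D convolution per bus with a graph convolution over the microgrid topology, can be sketched roughly as follows. This is a minimal NumPy illustration of the two building blocks, not the paper's architecture: the 3-bus line topology, smoothing filter, layer width, and random weights are all invented for the example.

```python
import numpy as np

def conv1d_temporal(x, kernel):
    """1-D convolution along the time axis, applied independently per node (bus).

    x: (num_nodes, time_steps) voltage measurements
    kernel: (k,) temporal filter
    Returns (num_nodes, time_steps - k + 1) temporal features.
    """
    return np.stack([np.convolve(row, kernel[::-1], mode="valid") for row in x])

def gcn_layer(h, adj, w):
    """One graph-convolution layer: add self-loops, symmetrically normalise
    the adjacency, then compute ReLU(A_hat @ H @ W)."""
    a = adj + np.eye(adj.shape[0])                  # self-loops
    d = np.diag(1.0 / np.sqrt(a.sum(axis=1)))       # D^{-1/2}
    a_hat = d @ a @ d                               # normalised adjacency
    return np.maximum(a_hat @ h @ w, 0.0)           # ReLU

# Toy 3-bus microgrid (line graph 0-1-2), 8 time steps of per-unit voltage.
rng = np.random.default_rng(0)
x = rng.normal(1.0, 0.01, size=(3, 8))
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)

h = conv1d_temporal(x, np.array([0.25, 0.5, 0.25]))  # temporal features
w = rng.normal(size=(h.shape[1], 4))                 # random stand-in weights
out = gcn_layer(h, adj, w)
print(out.shape)  # prints (3, 4): one 4-dim embedding per bus
```

In the full scheme, embeddings like `out` would feed task-specific heads for event detection, type/phase classification, and location, with the five models sharing weights via transfer learning and fine-tuning.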
COCO_TS Dataset: Pixel-level Annotations Based on Weak Supervision for Scene Text Segmentation
The absence of large scale datasets with pixel-level supervisions is a
significant obstacle for the training of deep convolutional networks for scene
text segmentation. For this reason, synthetic data generation is normally
employed to enlarge the training dataset. Nonetheless, synthetic data cannot
reproduce the complexity and variability of natural images. In this paper, a
weakly supervised learning approach is used to reduce the shift between
training on real and synthetic data. Pixel-level supervisions for a text
detection dataset (i.e. where only bounding-box annotations are available) are
generated. In particular, the COCO-Text-Segmentation (COCO_TS) dataset, which
provides pixel-level supervisions for the COCO-Text dataset, is created and
released. The generated annotations are used to train a deep convolutional
neural network for semantic segmentation. Experiments show that the proposed
dataset can be used instead of synthetic data, allowing us to use only a
fraction of the training samples while significantly improving performance.
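The general idea of deriving per-pixel targets from box-level annotations can be sketched as below. Note this geometric heuristic (box interior = text, a small "ignore" band around each box) is a hypothetical stand-in: the actual COCO_TS pipeline generates supervisions with a weakly supervised segmentation network, not a fixed rule.

```python
import numpy as np

def boxes_to_mask(shape, boxes, ignore_margin=2):
    """Turn bounding-box annotations into a coarse pixel-level supervision map.

    0 = background, 1 = text, 255 = 'ignore' band around each box where the
    box-level label is least reliable (a common convention for pixels
    excluded from the segmentation loss).
    shape: (H, W); boxes: iterable of (x0, y0, x1, y1).
    """
    mask = np.zeros(shape, dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        # Mark an uncertainty band slightly larger than the box...
        mask[max(y0 - ignore_margin, 0):y1 + ignore_margin,
             max(x0 - ignore_margin, 0):x1 + ignore_margin] = 255
        # ...then overwrite the box interior as definite text.
        mask[y0:y1, x0:x1] = 1
    return mask

mask = boxes_to_mask((32, 32), [(4, 4, 12, 10), (16, 20, 28, 26)])
```

A segmentation network trained on such masks would treat class 255 as ignored pixels, so only confident background and text pixels contribute to the loss.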
WordFences: Text localization and recognition
In collaboration with the Universitat de Barcelona (UB) and the Universitat Rovira i Virgili (URV).
In recent years, text recognition has achieved remarkable success in recognizing scanned
document text. However, word recognition in natural images is still an open problem,
which generally requires time consuming post-processing steps. We present a novel architecture
for individual word detection in scene images based on semantic segmentation.
Our contributions are twofold: the concept of WordFence, which detects border areas
surrounding each individual word, and a unique pixelwise weighted softmax loss function
which penalizes background and emphasizes small text regions. WordFence ensures that
each word is detected individually, and the new loss function provides a strong training
signal to both text and word border localization. The proposed technique avoids intensive
post-processing by combining semantic word segmentation with a voting scheme
for merging segmentations of multiple scales, producing an end-to-end word detection
system. We achieve superior localization recall on common benchmark datasets: 92%
recall on ICDAR11 and ICDAR13, and 63% recall on SVT. Furthermore, end-to-end
word recognition achieves a state-of-the-art 86% F-score on ICDAR13.
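A pixelwise weighted softmax (cross-entropy) loss of the kind described, down-weighting background and emphasizing text and word-border pixels, can be sketched as follows. The per-class weighting used here is a hypothetical stand-in, not the paper's exact formulation.

```python
import numpy as np

def weighted_softmax_loss(logits, labels, class_weights):
    """Pixelwise weighted cross-entropy.

    Each pixel's loss is scaled by the weight of its ground-truth class, so a
    small background weight penalizes background less and lets small text
    regions dominate the training signal.
    logits: (H, W, C); labels: (H, W) int; class_weights: (C,).
    """
    # Numerically stable log-softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    h, w = labels.shape
    # Pick each pixel's log-probability of its ground-truth class.
    picked = log_probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    weights = class_weights[labels]          # per-pixel weight from its class
    return -(weights * picked).sum() / weights.sum()

# 3 classes: background, text, word border; background down-weighted.
logits = np.zeros((4, 4, 3))                 # uniform predictions
labels = np.zeros((4, 4), dtype=int)
labels[1:3, 1:3] = 1                         # a small text region
loss = weighted_softmax_loss(logits, labels, np.array([0.1, 1.0, 1.0]))
print(round(loss, 4))  # prints 1.0986, i.e. log(3) for uniform predictions
```

With uniform logits every pixel contributes log(3) regardless of weighting; once predictions differ across classes, the low background weight shifts the gradient toward text and border pixels.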