Distribution pattern-driven development of service architectures
Distributed systems are commonly constructed by composing a number of discrete components. This practice is particularly prevalent in the Web service domain, in the form of service process orchestration and choreography. Enterprise systems are often built from many existing discrete applications, such as legacy applications exposed through Web service interfaces. A number of architectural configurations, or distribution patterns, express how a composed system is to be deployed in a distributed environment. However, the amount of code required to realise these distribution patterns is considerable. In this paper, we propose a distribution pattern-driven approach to service composition and architecting. Based on a catalog of patterns, we develop a UML-compliant framework that takes existing Web service interfaces as its input and generates executable Web service compositions according to a distribution pattern chosen by the software architect.
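The pattern-driven idea can be made concrete with a small sketch. The following Python illustration is hypothetical and is not the paper's UML-based framework: the Service class, the hub_and_spoke and pipeline rules, and the CATALOG dictionary are all invented for exposition. It shows how a catalog of distribution patterns can turn a list of existing service endpoints into a concrete wiring between them.

```python
# Hypothetical sketch of pattern-driven composition: a pattern catalog maps
# a distribution pattern name to a topology-building rule, and the architect's
# chosen pattern drives how existing service endpoints are wired together.
# All names here are illustrative, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    endpoint: str  # e.g. a WSDL location for a legacy app exposed as a Web service

def hub_and_spoke(services):
    """A central orchestrator invokes every other service directly."""
    hub, spokes = services[0], services[1:]
    return [(hub.name, s.name) for s in spokes]

def pipeline(services):
    """Each service forwards to the next (choreography-style chain)."""
    return [(a.name, b.name) for a, b in zip(services, services[1:])]

# The pattern catalog: the architect picks an entry, the framework generates wiring.
CATALOG = {"hub_and_spoke": hub_and_spoke, "pipeline": pipeline}

def compose(services, pattern):
    return CATALOG[pattern](services)

if __name__ == "__main__":
    svcs = [Service(f"S{i}", f"http://example.org/s{i}?wsdl") for i in range(4)]
    print(compose(svcs, "pipeline"))  # [('S0', 'S1'), ('S1', 'S2'), ('S2', 'S3')]
```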
Pixelated Semantic Colorization
While many image colorization algorithms have recently shown the capability
of producing plausible color versions from gray-scale photographs, they still
suffer from limited semantic understanding. To address this shortcoming, we
propose to exploit pixelated object semantics to guide image colorization. The
rationale is that human beings perceive and distinguish colors based on the
semantic categories of objects. Starting from an autoregressive model, we
generate image color distributions, from which diverse colored results are
sampled. We propose two ways to incorporate object semantics into the
colorization model: through a pixelated semantic embedding and a pixelated
semantic generator. Specifically, the proposed convolutional neural network
includes two branches. One branch learns what the object is, while the other
branch learns the object colors. The network jointly optimizes a color
embedding loss, a semantic segmentation loss and a color generation loss, in an
end-to-end fashion. Experiments on PASCAL VOC2012 and COCO-stuff reveal that
our network, when trained with semantic segmentation labels, produces more
realistic and finer results than the colorization state-of-the-art.
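To make the two-branch design concrete, here is a minimal PyTorch sketch. The shared trunk, layer sizes, and the NUM_CLASSES/NUM_COLOR_BINS values are illustrative assumptions rather than the paper's actual architecture, and the color embedding loss is omitted for brevity.

```python
# A minimal sketch of the two-branch idea: one branch predicts pixelwise
# semantic classes, the other a per-pixel color distribution, trained jointly.
import torch
import torch.nn as nn

NUM_CLASSES, NUM_COLOR_BINS = 21, 313  # e.g. PASCAL VOC classes, quantized ab bins

class TwoBranchColorizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(            # shared features from grayscale input
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.semantic_head = nn.Conv2d(64, NUM_CLASSES, 1)  # "what the object is"
        self.color_head = nn.Conv2d(64, NUM_COLOR_BINS, 1)  # "what color it takes"

    def forward(self, gray):
        f = self.trunk(gray)
        return self.semantic_head(f), self.color_head(f)

model = TwoBranchColorizer()
gray = torch.randn(2, 1, 64, 64)               # fake grayscale batch
sem_logits, color_logits = model(gray)

# Joint optimization: segmentation loss + color generation loss
# (the paper's additional color embedding loss is omitted here).
sem_target = torch.randint(0, NUM_CLASSES, (2, 64, 64))
color_target = torch.randint(0, NUM_COLOR_BINS, (2, 64, 64))
loss = nn.CrossEntropyLoss()(sem_logits, sem_target) + \
       nn.CrossEntropyLoss()(color_logits, color_target)
loss.backward()
```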
Verification and Validation of Semantic Annotations
In this paper, we propose a framework to perform verification and validation
of semantically annotated data. The annotations, extracted from websites, are
verified against the schema.org vocabulary and Domain Specifications to ensure
the syntactic correctness and completeness of the annotations. The Domain
Specifications allow checking the compliance of annotations against
corresponding domain-specific constraints. The validation mechanism will detect
errors and inconsistencies between the content of the analyzed schema.org
annotations and the content of the web pages where the annotations were found.
Comment: Accepted for the A.P. Ershov Informatics Conference 2019 (the PSI Conference Series, 12th edition) proceedings.
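As a rough illustration of the verification step, the following Python sketch checks a schema.org JSON-LD annotation against a toy, hand-written domain specification. The DOMAIN_SPEC dictionary and verify function are invented stand-ins for the framework described above, and the separate validation step (comparing annotation content against page content) is not shown.

```python
# A minimal sketch of verification: check a schema.org JSON-LD annotation
# against a toy domain specification listing the expected type and required
# properties. The real framework checks against the full schema.org
# vocabulary; this spec dict is an illustrative assumption.
DOMAIN_SPEC = {
    "Hotel": {"required": ["name", "address", "telephone"]},
}

def verify(annotation: dict) -> list[str]:
    errors = []
    a_type = annotation.get("@type")
    if a_type not in DOMAIN_SPEC:                    # syntactic check: known type?
        return [f"unknown or missing @type: {a_type!r}"]
    for prop in DOMAIN_SPEC[a_type]["required"]:     # completeness check
        if prop not in annotation:
            errors.append(f"{a_type}: missing required property '{prop}'")
    return errors

annotation = {"@context": "https://schema.org", "@type": "Hotel", "name": "Alpenhof"}
print(verify(annotation))  # reports the missing 'address' and 'telephone'
```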
Depth Estimation via Affinity Learned with Convolutional Spatial Propagation Network
Depth estimation from a single image is a fundamental problem in computer
vision. In this paper, we propose a simple yet effective convolutional spatial
propagation network (CSPN) to learn the affinity matrix for depth prediction.
Specifically, we adopt an efficient linear propagation model, in which the
propagation is performed as a recurrent convolutional operation,
and the affinity among neighboring pixels is learned through a deep
convolutional neural network (CNN). We apply the designed CSPN to two depth
estimation tasks given a single image: (1) To refine the depth output from
state-of-the-art (SOTA) existing methods; and (2) to convert sparse depth
samples to a dense depth map by embedding the depth samples within the
propagation procedure. The second task is inspired by the availability of
LiDAR sensors, which provide sparse but accurate depth measurements. We evaluate the proposed CSPN on two popular depth estimation benchmarks, NYU v2 and KITTI, and show that our approach improves over prior SOTA methods in not only quality (e.g., 30% greater reduction in depth error) but also speed (e.g., 2 to 5 times faster).
Comment: 14 pages, 8 figures, ECCV 2018.
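The linear propagation step can be sketched in PyTorch as follows: each pixel's depth is repeatedly updated as an affinity-weighted combination of its 3x3 neighborhood. The normalization and center-weight convention here are a plausible reading of the description above; tensor shapes and variable names are illustrative assumptions, and the affinities, which in the paper come from a learned CNN, are faked with random tensors.

```python
# A sketch of one linear propagation step in the spirit of CSPN: each pixel's
# depth becomes an affinity-weighted average of its 3x3 neighbors.
import torch
import torch.nn.functional as F

def cspn_step(depth, affinity):
    """depth: (N,1,H,W); affinity: (N,8,H,W) weights for the 8 neighbors."""
    N, _, H, W = depth.shape
    # Normalize so the neighbor weights' magnitudes sum to <= 1; the center
    # gets the remainder, keeping the diffusion-like update stable.
    norm = affinity.abs().sum(dim=1, keepdim=True).clamp(min=1e-8)
    w = affinity / norm                        # (N,8,H,W) neighbor weights
    center = 1.0 - w.sum(dim=1, keepdim=True)  # (N,1,H,W) center weight
    # Gather the 8 neighbors of every pixel via unfold (3x3 patches).
    patches = F.unfold(depth, kernel_size=3, padding=1).view(N, 9, H, W)
    neighbors = torch.cat([patches[:, :4], patches[:, 5:]], dim=1)  # drop center
    return center * depth + (w * neighbors).sum(dim=1, keepdim=True)

depth = torch.rand(1, 1, 32, 32)      # coarse depth from an existing estimator
affinity = torch.randn(1, 8, 32, 32)  # would be predicted by a CNN in practice
for _ in range(10):                   # a few recurrent propagation iterations
    depth = cspn_step(depth, affinity)
```

For the sparse-to-dense task, the known sparse depth samples would additionally be re-imposed at their pixel locations after each iteration, so the propagation spreads the accurate measurements while preserving them.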
Generating Visual Representations for Zero-Shot Classification
This paper addresses the task of learning an image classifier when some categories are defined by semantic descriptions only (e.g., visual attributes) while the others are defined by exemplar images as well. This task is often referred to as Zero-Shot Classification (ZSC). Most previous methods rely on learning a common embedding space that allows visual features of unknown categories to be compared with semantic descriptions. This paper argues that these approaches are limited because i) efficient discriminative classifiers cannot be used, and ii) classification tasks with both seen and unseen categories (Generalized Zero-Shot Classification, or GZSC) cannot be addressed efficiently. In contrast, this paper proposes to address ZSC and GZSC by i) learning a conditional generator using seen classes and ii) generating artificial training examples for the categories without exemplars. ZSC is then turned into a standard supervised learning problem. Experiments with 4 generative models and 5 datasets validate the approach, giving state-of-the-art results on both ZSC and GZSC.
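The generate-then-classify recipe reduces to a few lines of mechanics, sketched below. The generator architecture, the dimensions, and the synthesize helper are illustrative assumptions, not one of the paper's four generative models, and the generator's own training on seen classes is omitted.

```python
# A toy sketch of the recipe: a conditional generator (assumed already trained
# on seen classes) synthesizes visual features for unseen classes from their
# attribute vectors; ZSC then reduces to ordinary supervised learning.
import torch
import torch.nn as nn

FEAT_DIM, ATTR_DIM, NOISE_DIM = 2048, 85, 32  # e.g. CNN features + class attributes

generator = nn.Sequential(                    # G(z, attributes) -> visual feature
    nn.Linear(NOISE_DIM + ATTR_DIM, 512), nn.ReLU(),
    nn.Linear(512, FEAT_DIM))

def synthesize(attr, n_per_class=100):
    """Sample artificial training features for one unseen class."""
    z = torch.randn(n_per_class, NOISE_DIM)
    a = attr.expand(n_per_class, -1)
    return generator(torch.cat([z, a], dim=1))

# Stand-in attribute vectors for two unseen classes; in practice these come
# from the dataset's semantic descriptions (attributes, word embeddings, ...).
unseen_attrs = torch.rand(2, ATTR_DIM)
X = torch.cat([synthesize(a) for a in unseen_attrs]).detach()
y = torch.arange(2).repeat_interleave(100)
# Any standard discriminative classifier can now be trained on (X, y).
```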