Deep Hierarchical Parsing for Semantic Segmentation
This paper proposes a learning-based approach to scene parsing inspired by
the deep Recursive Context Propagation Network (RCPN). RCPN is a deep
feed-forward neural network that utilizes the contextual information from the
entire image, through bottom-up followed by top-down context propagation via
random binary parse trees. This improves the feature representation of every
super-pixel in the image for better classification into semantic categories. We
analyze RCPN and propose two novel contributions to further improve the model.
We first analyze the learning of RCPN parameters and discover the presence of
bypass error paths in the computation graph of RCPN that can hinder contextual
propagation. We propose to tackle this problem by including the classification
loss of the internal nodes of the random parse trees in the original RCPN loss
function. Secondly, we use an MRF on the parse tree nodes to model the
hierarchical dependency present in the output. Both modifications provide
performance boosts over the original RCPN and the new system achieves
state-of-the-art performance on Stanford Background, SIFT-Flow and Daimler
urban datasets.
Comment: IEEE CVPR 201
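The proposed auxiliary loss can be illustrated with a toy sketch. This is not the authors' implementation: the feature dimensions, the combiner, the random merge order, and the choice of internal-node labels below are all illustrative assumptions. The key point it shows is that cross-entropy is accumulated at the leaves and again at every internal node created while a random binary parse tree is built bottom-up, so gradients cannot bypass the internal combiner parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def combine(a, b, W):
    # Bottom-up step: merge two child features into a parent feature.
    return np.tanh(W @ np.concatenate([a, b]))

def classify(h, Wc):
    # Softmax classifier shared by leaf and internal nodes.
    z = Wc @ h
    e = np.exp(z - z.max())
    return e / e.sum()

def tree_loss(features, labels, W, Wc):
    """Cross-entropy summed over the leaves AND the internal nodes of a
    random binary parse tree (the proposed auxiliary internal-node loss)."""
    nodes = list(zip(features, labels))
    loss = sum(-np.log(classify(f, Wc)[y]) for f, y in nodes)
    while len(nodes) > 1:
        # Randomly merge two nodes, as in RCPN's random parse trees.
        i, j = rng.choice(len(nodes), size=2, replace=False)
        (fi, yi), (fj, yj) = nodes[i], nodes[j]
        parent = combine(fi, fj, W)
        y = yi  # toy internal-node label (e.g. a child's label); an assumption
        loss += -np.log(classify(parent, Wc)[y])  # internal-node loss term
        nodes = [n for k, n in enumerate(nodes) if k not in (i, j)]
        nodes.append((parent, y))
    return loss
```

Without the terms added inside the loop, only the root and leaf classifications would contribute to the loss, which is the bypass-path issue the paper identifies.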
ESPOON: Enforcing Security Policies In Outsourced Environments
Data outsourcing is a growing business model that offers individuals and
enterprises services for processing and storing huge amounts of data. It is not
only economical but also promises higher availability, scalability, and more
effective quality of service than in-house solutions. Despite all its benefits,
data outsourcing raises serious security concerns for preserving data
confidentiality. There are solutions for preserving confidentiality of data
while supporting search on the data stored in outsourced environments. However,
such solutions do not support access policies to regulate access to a
particular subset of the stored data.
For complex user management, large enterprises employ Role-Based Access
Control (RBAC) models, making access decisions based on the role in which a
user is active. However, RBAC models cannot be deployed in outsourced
environments as they rely on trusted infrastructure in order to regulate access
to the data. The deployment of RBAC models may reveal private information about
sensitive data they aim to protect. In this paper, we aim to fill this gap
by proposing \textbf{ESPOON} for enforcing RBAC policies in
outsourced environments. ESPOON enforces RBAC policies in an
encrypted manner, where a curious service provider may learn only very limited
information about the RBAC policies. We have implemented ESPOON
and provide a performance evaluation showing limited overhead, thus
confirming the viability of our approach.
Comment: The final version of this paper has been accepted for publication in
Elsevier Computers & Security 2013. arXiv admin note: text overlap with
arXiv:1306.482
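This is not ESPOON's actual protocol, but the general idea of evaluating access policies at an untrusted provider can be sketched minimally: the client replaces role and permission names with keyed pseudonyms before outsourcing, so the provider can match requests against stored rules without learning the cleartext policy. The HMAC blinding, the key name, and the store API below are all assumptions made for illustration only.

```python
import hmac
import hashlib

def blind(value: str, key: bytes) -> str:
    # Client-side: replace a role/permission name with a keyed pseudonym
    # so the curious provider never sees the cleartext policy.
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()

class OutsourcedPolicyStore:
    """Provider-side store (hypothetical): holds only blinded
    (role, permission) pairs and matches blinded requests against them."""
    def __init__(self):
        self.policy = set()

    def add_rule(self, blinded_role, blinded_perm):
        self.policy.add((blinded_role, blinded_perm))

    def check(self, blinded_role, blinded_perm):
        # The provider learns only whether two blinded tokens match.
        return (blinded_role, blinded_perm) in self.policy

key = b"enterprise-secret"  # kept on the trusted client side
store = OutsourcedPolicyStore()
store.add_rule(blind("doctor", key), blind("read:records", key))

# Requests are blinded with the same key before being sent to the provider.
allowed = store.check(blind("doctor", key), blind("read:records", key))
denied = store.check(blind("nurse", key), blind("read:records", key))
```

Deterministic pseudonyms like these still leak access patterns and rule counts; ESPOON's encrypted policy evaluation is designed to bound exactly what such a curious provider can infer.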
On the Effect of Semantically Enriched Context Models on Software Modularization
Many of the existing approaches for program comprehension rely on the
linguistic information found in source code, such as identifier names and
comments. Semantic clustering is one such technique for modularization of the
system that relies on the informal semantics of the program, encoded in the
vocabulary used in the source code. Treating the source code as a collection of
tokens loses the semantic information embedded within the identifiers. We try
to overcome this problem by introducing context models for source code
identifiers to obtain a semantic kernel, which can be used for both deriving
the topics that run through the system as well as their clustering. In the
first model, we abstract an identifier to its type representation and build on
this notion of context to construct a contextual vector representation of the
source code. The second notion of context is defined based on the flow of data
between identifiers to represent a module as a dependency graph where the nodes
correspond to identifiers and the edges represent the data dependencies between
pairs of identifiers. We have applied our approach to 10 medium-sized open
source Java projects, and show that by introducing contexts for identifiers,
the quality of the modularization of the software systems is improved. Both of
the context models give results that are superior to the plain vector
representation of documents. In some cases, the authoritativeness of
decompositions is improved by 67%. Furthermore, a more detailed evaluation of
our approach on JEdit, an open source editor, demonstrates that inferred topics
through performing topic analysis on the contextual representations are more
meaningful compared to the plain representation of the documents. The proposed
approach in introducing a context model for source code identifiers paves the
way for building tools that support developers in program comprehension tasks
such as application and domain concept location, software modularization, and
topic analysis.
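The first context model (type abstraction) can be sketched as follows. The identifiers, types, and modules below are hypothetical, and real contextual vectors would be far richer; the sketch only shows how abstracting identifiers to their declared types lets modules with similar vocabulary usage cluster together.

```python
from collections import Counter
import math

def type_context_vector(identifiers, types):
    """Sketch of the first context model: abstract each identifier to its
    declared type and represent a module by its type-usage counts."""
    return Counter(types[name] for name in identifiers)

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical modules with declared types for their identifiers.
types = {"conn": "Connection", "stmt": "Statement", "rows": "ResultSet",
         "btn": "JButton", "panel": "JPanel", "frame": "JFrame"}
db_module = type_context_vector(["conn", "stmt", "rows"], types)
ui_module = type_context_vector(["btn", "panel", "frame"], types)
db_again = type_context_vector(["conn", "rows"], types)
```

Here the two database-flavoured modules are close in the type-context space while the UI module is orthogonal to both, which is the signal a clustering step would exploit.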
Extending local features with contextual information in graph kernels
Graph kernels are usually defined in terms of simpler kernels over local
substructures of the original graphs. Different kernels consider different
types of substructures. However, in some cases they have similar predictive
performances, probably because the substructures can be interpreted as
approximations of the subgraphs they induce. In this paper, we propose to
associate to each feature a piece of information about the context in which the
feature appears in the graph. A substructure appearing in two different graphs
will match only if it appears with the same context in both graphs. We propose
a kernel based on this idea that considers trees as substructures, and where
the contexts are features too. The kernel is inspired by the framework in
[6], although it does not fall within it. We give an efficient algorithm for computing
the kernel and show promising results on real-world graph classification
datasets.
Comment: To appear in ICONIP 201
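The matching rule can be sketched with a deliberately simplified feature map. The paper uses trees as substructures; here, as an assumption to keep the sketch short, each vertex label is the local feature and the multiset of its neighbours' labels is its context, so a feature contributes to the kernel only when it appears with the same context in both graphs.

```python
from collections import Counter

def contextual_features(labels, adj):
    """Each node yields the pair (own label, sorted neighbour labels):
    the label is the local feature, the neighbourhood is its context."""
    feats = Counter()
    for v, nbrs in adj.items():
        ctx = tuple(sorted(labels[n] for n in nbrs))
        feats[(labels[v], ctx)] += 1
    return feats

def kernel(g1, g2):
    # Dot product in the (feature, context) space: a feature counts
    # only when it appears with the same context in both graphs.
    f1, f2 = contextual_features(*g1), contextual_features(*g2)
    return sum(c * f2[k] for k, c in f1.items())

# Two toy labelled graphs: the paths A-B-C and A-B-A.
g1 = ({0: "A", 1: "B", 2: "C"}, {0: [1], 1: [0, 2], 2: [1]})
g2 = ({0: "A", 1: "B", 2: "A"}, {0: [1], 1: [0, 2], 2: [1]})
```

Without contexts, the "B" features of the two graphs would match; with contexts they do not, because B's neighbourhood differs (A,C versus A,A), which is exactly the extra discrimination the contextual kernel buys.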