Personalized Ranking for Context-Aware Venue Suggestion
Making personalized and context-aware suggestions of venues to users is
crucial in venue recommendation. These suggestions are often based on
matching the venues' features with the users' preferences, which can be
collected from previously visited locations. In this paper we present a novel
user-modeling approach which relies on a set of scoring functions for making
personalized suggestions of venues based on venue content and reviews as well
as user context. Our experiments, conducted on the dataset of the TREC
Contextual Suggestion Track, prove that our methodology outperforms
state-of-the-art approaches by a significant margin.
Comment: The 32nd ACM SIGAPP Symposium On Applied Computing (SAC), Marrakech,
Morocco, April 4-6, 201
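The abstract above describes combining scoring functions over venue content, reviews, and user context. As a rough illustration of that idea (not the paper's actual model), the following sketch combines three hypothetical per-aspect scoring functions into one weighted personalized ranking; all function names, fields, and weights are invented for this example.

```python
# Hypothetical sketch: score-based user modeling for venue suggestion.
# The venues/profile schema and the weights below are assumptions, not
# the paper's actual feature set.

def content_score(venue_tags, user_liked_tags):
    """Fraction of the venue's tags the user has liked before."""
    if not venue_tags:
        return 0.0
    return len(set(venue_tags) & set(user_liked_tags)) / len(venue_tags)

def review_score(ratings):
    """Mean star rating normalized to [0, 1] (assumes a 5-star scale)."""
    if not ratings:
        return 0.0
    return sum(ratings) / (5.0 * len(ratings))

def context_score(venue_city, user_city):
    """1.0 when the venue matches the user's current city, else 0.0."""
    return 1.0 if venue_city == user_city else 0.0

def rank_venues(venues, profile, weights=(0.5, 0.3, 0.2)):
    """Rank venues by a weighted sum of the three scoring functions."""
    w_content, w_review, w_context = weights
    def score(v):
        return (w_content * content_score(v["tags"], profile["liked_tags"])
                + w_review * review_score(v["ratings"])
                + w_context * context_score(v["city"], profile["city"]))
    return sorted(venues, key=score, reverse=True)
```

A venue sharing the user's liked tags and current city dominates the ranking even with fewer reviews, which is the intuition behind preference-plus-context matching.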
Improving Function Coverage with Munch: A Hybrid Fuzzing and Directed Symbolic Execution Approach
Fuzzing and symbolic execution are popular techniques for finding
vulnerabilities and generating test-cases for programs. Fuzzing, a blackbox
method that mutates seed input values, is generally incapable of generating
diverse inputs that exercise all paths in the program. Due to the
path-explosion problem and dependence on SMT solvers, symbolic execution may
also not achieve high path coverage. A hybrid technique involving fuzzing and
symbolic execution may achieve better function coverage than fuzzing or
symbolic execution alone. In this paper, we present Munch, an open source
framework implementing two hybrid techniques based on fuzzing and symbolic
execution. We empirically show using nine large open-source programs that
overall, Munch achieves higher (in-depth) function coverage than symbolic
execution or fuzzing alone. Using metrics based on total analyses time and
number of queries issued to the SMT solver, we also show that Munch is more
efficient at achieving better function coverage.
Comment: To appear at 33rd ACM/SIGAPP Symposium On Applied Computing (SAC). To
be held from 9th to 13th April, 201
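The hand-off pattern the abstract describes (fuzz first, then direct symbolic execution at what fuzzing missed) can be sketched in miniature. This is a toy illustration, not Munch: the target program, the coverage goals, and the "solver" stub are all invented, and the stub stands in for a real symbolic executor backed by an SMT solver.

```python
# Toy sketch of the fuzz-then-symbolic hand-off: random fuzzing covers the
# easy paths; targets still uncovered after the fuzzing budget are handed
# to a directed "solver" stage. Everything here is a stand-in.

import random

covered = set()

def target(x):
    """Toy program: one easy path and one guarded by a magic constant."""
    covered.add("entry")
    if x == 0xDEADBEEF:            # hard for random fuzzing to guess
        covered.add("deep")

def fuzz(budget=1000, seed=0):
    """Blackbox stage: throw random 32-bit inputs at the target."""
    rng = random.Random(seed)
    for _ in range(budget):
        target(rng.randrange(2**32))

def directed_solve(goal):
    """Stand-in for directed symbolic execution: 'solves' the guard."""
    if goal == "deep":
        return 0xDEADBEEF          # an SMT solver would derive this
    return None

def hybrid(goals=frozenset({"entry", "deep"})):
    """Run fuzzing, then hand each uncovered goal to the solver."""
    fuzz()
    for goal in set(goals) - covered:
        inp = directed_solve(goal)
        if inp is not None:
            target(inp)
    return covered
```

Random fuzzing alone almost never hits the magic constant, so the "deep" goal is only reached via the directed stage, mirroring the coverage argument made above.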
Towards a logic for performance and mobility
Klaim is an experimental language designed for modeling and programming
distributed systems composed of mobile components where distribution awareness
and dynamic system architecture configuration are key issues. StocKlaim
[R. De Nicola, D. Latella, and M. Massink. Formal modeling and quantitative
analysis of KLAIM-based mobile systems. In ACM Symposium on Applied Computing
(SAC). ACM Press, 2005. Also available as Technical Report 2004-TR-25;
CNR/ISTI, 2004] is a Markovian extension of the core subset of Klaim which
includes process distribution, process mobility, asynchronous communication,
and site creation. In this paper, MoSL, a temporal logic for StocKlaim is
proposed which addresses and integrates the issues of distribution awareness
and mobility and those concerning stochastic behaviour of systems. The
satisfiability relation is formally defined over labelled Markov chains. A
large fragment of the proposed logic can be translated to action-based CSL
for which efficient model-checkers exist. This way, such model-checkers can
be used for the verification of StocKlaim models against MoSL properties. An
example application is provided in the present paper.
A Real-Time Remote IDS Testbed for Connected Vehicles
Connected vehicles are becoming commonplace. A constant connection between
vehicles and a central server enables new features and services. This added
connectivity raises the likelihood of exposure to attackers and risks
unauthorized access. A possible countermeasure to this issue is an intrusion
detection system (IDS), which aims to detect these intrusions during or
after their occurrence. The problem with IDS is the large variety of possible
approaches with no sensible option for comparing them. Our contribution to this
problem comprises the conceptualization and implementation of a testbed for an
automotive real-world scenario. That amounts to a server-side IDS detecting
intrusions into vehicles remotely. To verify the validity of our approach, we
evaluate the testbed from multiple perspectives, including its fitness for
purpose and the quality of the data it generates. Our evaluation shows that the
testbed makes the effective assessment of various IDS possible. It solves
multiple problems of existing approaches, including class imbalance.
Additionally, it enables reproducibility and the generation of data of varying
detection difficulty. This allows for comprehensive evaluation of real-time,
remote IDS.
Comment: Peer-reviewed version accepted for publication in the proceedings of
the 34th ACM/SIGAPP Symposium On Applied Computing (SAC'19)
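Two points the abstract makes (controlling class imbalance and varying detection difficulty) can be illustrated with a minimal sketch, assuming nothing about the paper's actual testbed: a generator with a configurable attack ratio and noise level, evaluated with precision/recall rather than accuracy, which loses meaning on imbalanced data. All parameters and the threshold detector are invented for this example.

```python
# Illustrative sketch (not the paper's testbed): generate labeled message
# streams with a configurable attack ratio, so an IDS can be evaluated on
# data whose class balance and difficulty are controlled, not fixed.

import random

def generate_stream(n, attack_ratio, noise=0.0, seed=42):
    """Return (value, is_attack) pairs; attacks shift the signal upward.
    Higher `noise` blurs the separation, raising detection difficulty."""
    rng = random.Random(seed)
    stream = []
    for _ in range(n):
        is_attack = rng.random() < attack_ratio
        base = 5.0 if is_attack else 0.0
        stream.append((base + rng.gauss(0, 1 + noise), is_attack))
    return stream

def evaluate(stream, threshold=2.5):
    """Score a trivial threshold IDS with precision and recall, which
    stay meaningful under class imbalance where raw accuracy does not."""
    tp = fp = fn = 0
    for value, is_attack in stream:
        flagged = value > threshold
        tp += int(flagged and is_attack)
        fp += int(flagged and not is_attack)
        fn += int(not flagged and is_attack)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

With a 10% attack ratio, a classifier that always says "benign" would score 90% accuracy while detecting nothing; precision/recall expose that failure immediately, which is why imbalance-aware generation and metrics matter for IDS comparison.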
Ubi-App: A Ubiquitous Application for Universal Access from Handheld Devices
Universal access from a handheld device (such as a PDA or cell phone) at any
time or anywhere is now a reality. Ubicomp Assistant (UA) (Sharmin et al. in
Proceedings of the 21st annual ACM symposium on applied computing (ACM SAC
2006), Dijon, France, pp 1013–1017, 2006) is an integral service of MARKS
(Sharmin et al. in Proceedings of the third international conference on
information technology: new generations (ITNG 2006), Las Vegas, Nevada, USA,
pp 306–313, 2006). It is a middleware developed for handheld devices, and has
been designed to accommodate different types of users (e.g., education,
healthcare, marketing, or business). This customizable service employs the
ubiquitous nature of current short-range, low-power wireless connectivity and
readily available, low-cost lightweight mobile devices. These devices can
reach other neighbouring devices using a free short-range ad hoc network. To
the best of the authors' knowledge, the UA service is the only service
designed for these devices. This paper presents the details of Ubi-App, a
ubiquitous application for universal access from any handheld device, which
uses UA as a service. The results of a usability test and performance
evaluation of the prototype show that Ubi-App is useful, easy to use, easy to
install, and does not degrade the performance of the device.
Same but Different: Distant Supervision for Predicting and Understanding Entity Linking Difficulty
Entity Linking (EL) is the task of automatically identifying entity mentions
in a piece of text and resolving them to a corresponding entity in a reference
knowledge base like Wikipedia. There is a large number of EL tools available
for different types of documents and domains, yet EL remains a challenging task
where the lack of precision on particularly ambiguous mentions often spoils the
usefulness of automated disambiguation results in real applications. A priori
estimates of the difficulty of linking a particular entity mention can
facilitate flagging of critical cases as part of semi-automated EL systems,
while detecting latent factors that affect the EL performance, like
corpus-specific features, can provide insights on how to improve a system based
on the special characteristics of the underlying corpus. In this paper, we
first introduce a consensus-based method to generate difficulty labels for
entity mentions on arbitrary corpora. The difficulty labels are then exploited
as training data for a supervised classification task able to predict the EL
difficulty of entity mentions using a variety of features. Experiments over a
corpus of news articles show that EL difficulty can be estimated with high
accuracy, revealing also latent features that affect EL performance. Finally,
evaluation results demonstrate the effectiveness of the proposed method to
inform semi-automated EL pipelines.
Comment: Preprint of paper accepted for publication in the 34th ACM/SIGAPP
Symposium On Applied Computing (SAC 2019)
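The consensus idea above (derive difficulty labels from how much independent EL tools agree on a mention) can be sketched as follows. This is a hedged illustration, not the paper's labeling procedure: the tool outputs are toy entity identifiers, and the agreement threshold is an invented parameter.

```python
# Hedged sketch of consensus-based difficulty labeling: run several EL
# tools on the same mention and label it by how much their outputs agree.
# Inputs are toy entity IDs, not real EL system output.

from collections import Counter

def difficulty_label(linked_entities, easy_threshold=0.75):
    """Label a mention from the entities assigned by several EL tools:
    high agreement on a single entity -> 'easy', otherwise 'hard'."""
    if not linked_entities:
        return "hard"
    top_count = Counter(linked_entities).most_common(1)[0][1]
    agreement = top_count / len(linked_entities)
    return "easy" if agreement >= easy_threshold else "hard"
```

Labels produced this way over a whole corpus could then serve as training data for a supervised difficulty classifier over mention features, which is the role distant supervision plays in the abstract.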
How Many and What Types of SPARQL Queries can be Answered through Zero-Knowledge Link Traversal?
The current de-facto way to query the Web of Data is through the SPARQL
protocol, where a client sends queries to a server through a SPARQL endpoint.
Contrary to an HTTP server, providing and maintaining a robust and reliable
endpoint requires a significant effort that not all publishers are willing or
able to make. An alternative query evaluation method is through link traversal,
where a query is answered by dereferencing online web resources (URIs) in
real time. While several approaches for such a lookup-based query evaluation method
have been proposed, there exists no analysis of the types (patterns) of queries
that can be directly answered on the live Web, without accessing local or
remote endpoints and without a priori knowledge of available data sources. In
this paper, we first provide a method for checking if a SPARQL query (to be
evaluated on a SPARQL endpoint) can be answered through zero-knowledge link
traversal (without accessing the endpoint), and analyse a large corpus of real
SPARQL query logs for finding the frequency and distribution of answerable and
non-answerable query patterns. Subsequently, we provide an algorithm for
transforming answerable queries to SPARQL-LD queries that bypass the endpoints.
We report experimental results about the efficiency of the transformed queries
and discuss the benefits and the limitations of this query evaluation method.
Comment: Preprint of paper accepted for publication in the 34th ACM/SIGAPP
Symposium On Applied Computing (SAC 2019)
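The answerability check described above can be approximated in a few lines: with no endpoint and no prior source knowledge, link traversal must start from a concrete, dereferenceable IRI, so a triple pattern whose subject and object are both variables is only reachable if it shares a variable with an already-answerable pattern. This is a simplified sketch under invented pattern encoding (`<iri>` / `?var` strings), not the paper's algorithm.

```python
# Simplified zero-knowledge answerability check for basic graph patterns.
# A pattern is a (subject, predicate, object) tuple of strings, where
# "<...>" marks an IRI and "?..." a variable; this encoding is invented.

def is_iri(term):
    return term.startswith("<") and term.endswith(">")

def pattern_answerable(s, p, o):
    """Traversal can evaluate a pattern on its own only if the subject
    or object is a dereferenceable IRI to start the lookup from."""
    return is_iri(s) or is_iri(o)

def query_answerable(patterns):
    """Whole-query check (simplified): every pattern must hold an IRI
    itself or share a variable with an already-answerable pattern."""
    answerable_vars = set()
    remaining = list(patterns)
    progress = True
    while progress and remaining:
        progress = False
        for pat in list(remaining):
            s, _, o = pat
            pat_vars = {t for t in (s, o) if t.startswith("?")}
            if is_iri(s) or is_iri(o) or (pat_vars & answerable_vars):
                answerable_vars |= pat_vars  # bindings become start points
                remaining.remove(pat)
                progress = True
    return not remaining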
- …