Device-Centric Monitoring for Mobile Device Management
The ubiquity of computing devices has led to an increased need to ensure not only that the applications deployed on them are correct with respect to their specifications, but also that the devices are used in an appropriate manner, especially when the device is provided by a party other than the actual user. Most existing work on runtime verification for mobile devices and operating systems is application-centric, making global, device-centric properties (e.g. that the user may not send more than 100 messages per day across all applications) difficult or impossible to verify. In this paper we present a device-centric approach to runtime verification of device behaviour against a device policy, with the different applications acting as independent components contributing to the overall behaviour of the device. We also present an implementation for Android devices and evaluate it on a number of device-centric policies, reporting the empirical results obtained.

Comment: In Proceedings FESCA 2016, arXiv:1603.0837
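The device-centric idea in this abstract can be sketched as a single shared monitor to which every application reports its events, so that a global policy is checked across all of them. This is only an illustrative sketch of the concept, not the paper's implementation; the class and method names are assumptions, and the 100-message limit is taken from the abstract's example policy.

```python
from collections import defaultdict

class PolicyViolation(Exception):
    pass

class DevicePolicyMonitor:
    """Device-centric monitor: all apps feed events into one shared
    monitor, so device-wide properties can be checked."""

    def __init__(self, daily_message_limit=100):
        self.daily_message_limit = daily_message_limit
        self.sent_per_day = defaultdict(int)  # day -> messages across ALL apps

    def on_message_sent(self, app_id, day):
        # Each application contributes events independently,
        # but the count (and hence the policy) is global.
        self.sent_per_day[day] += 1
        if self.sent_per_day[day] > self.daily_message_limit:
            raise PolicyViolation(
                f"device-wide limit exceeded on {day} (last sender: {app_id})")
```

An application-centric monitor would keep a separate counter per app and could never detect that the combined total exceeds the limit; the single shared counter is what makes the property device-centric.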
A monitoring approach for runtime service discovery
Effective runtime service discovery requires identification of services based on different service characteristics such as structural, behavioural, quality, and contextual characteristics. However, current service registries describe services only in terms of structural and sometimes quality characteristics, and therefore it is not always possible to assume that the services in them will have all the characteristics required for effective service discovery. In this paper, we describe a monitor-based runtime service discovery framework called MoRSeD. The framework supports service discovery in both push and pull modes of query execution. The push mode of query execution is performed in parallel to the execution of a service-based system, in a proactive way. Both types of queries are specified in a query language called SerDiQueL, which allows the representation of structural, behavioural, quality, and contextual conditions of the services to be identified. The framework uses a monitor component to verify whether behavioural and contextual conditions in the queries can be satisfied by services, based on translations of these conditions into properties represented in event calculus and verification of the satisfiability of these properties against services. The monitor is also used to detect that services participating in a service-based system have become unavailable, and to identify changes in the behavioural and contextual characteristics of the services. A prototype implementation of the framework has been developed, and the framework has been evaluated by comparing its performance when using and when not using the monitor component.
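The push/pull distinction in the abstract can be illustrated with a toy registry: a pull query is evaluated on demand against the current registry contents, while a push query is registered once and re-evaluated proactively as the registry changes. This is a hand-rolled sketch of the idea only; the class and method names are assumptions and do not reflect MoRSeD's actual API.

```python
class Registry:
    """Toy service registry illustrating pull vs push query modes."""

    def __init__(self, services):
        self.services = services  # name -> dict of service characteristics
        self._watch = None

    def pull_query(self, predicate):
        # Pull mode: evaluated once, on demand, against the current contents.
        return [name for name, chars in self.services.items() if predicate(chars)]

    def push_query(self, predicate, notify):
        # Push mode: registered once, then checked proactively on each change.
        self._watch = (predicate, notify)

    def update(self, name, characteristics):
        self.services[name] = characteristics
        if self._watch is not None:
            predicate, notify = self._watch
            if predicate(characteristics):
                notify(name)  # proactively report the newly matching service
```

In the framework described above the predicate would be a SerDiQueL query, with behavioural and contextual conditions discharged by the monitor component rather than by a simple in-memory check.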
Context modeling and constraints binding in web service business processes
Context awareness is a principle used in pervasive services applications to enhance their flexibility and adaptability to changing conditions and dynamic environments. Ontologies provide a suitable framework for context modeling and reasoning. We develop a context model for executable business processes, captured as an ontology for the web services domain. A web service description is attached to a service context profile, which is bound to the context ontology. Context instances can be generated dynamically at service runtime and are bound to context constraint services. Constraint services facilitate setting up both constraint properties and constraint checkers, which determine the dynamic validity of context instances. Data collectors focus on capturing context instances. Runtime integration of both constraint services and data collectors permits the business process to achieve dynamic business goals.
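The constraint-service idea described above pairs a constraint property with a constraint checker, and context instances captured by data collectors are validated against the pair at runtime. The following is a minimal sketch of that pairing under assumed names; it does not reproduce the paper's ontology-based model.

```python
class ContextConstraint:
    """A constraint property together with its checker, which together
    determine the dynamic validity of a context instance."""

    def __init__(self, prop_name, checker):
        self.prop_name = prop_name  # the constraint property
        self.checker = checker      # the constraint checker

    def valid(self, context_instance):
        # A context instance is a dict of captured context values; missing
        # properties make the instance invalid for this constraint.
        return self.checker(context_instance.get(self.prop_name))

# Example (hypothetical): a process step that may only run in business hours.
business_hours = ContextConstraint(
    "hour", lambda h: h is not None and 9 <= h < 17)
```

In the paper's setting the context instance would be generated dynamically at service runtime by a data collector and bound to the constraint service, rather than passed in directly.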
A Declarative Framework for Specifying and Enforcing Purpose-aware Policies
Purpose is crucial for privacy protection, as it makes users confident that their personal data are processed as intended. Available proposals for the specification and enforcement of purpose-aware policies are unsatisfactory because of their ambiguous semantics of purposes and/or their lack of support for the run-time enforcement of policies.
In this paper, we propose a declarative framework based on a first-order temporal logic that allows us to give a precise semantics to purpose-aware policies and to reuse algorithms for the design of a run-time monitor enforcing purpose-aware policies. We also establish the complexity of the generation and use of the monitor which, to the best of our knowledge, is the first such result in the literature on purpose-aware policies.

Comment: Extended version of the paper accepted at the 11th International Workshop on Security and Trust Management (STM 2015)
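A run-time monitor for one classic instance of a purpose-aware policy can be sketched directly over an event trace: personal data collected for a declared purpose may later be used only for that purpose. The paper's framework derives such monitors from first-order temporal logic formulas; the hand-coded check below is only an illustration of what the generated monitor enforces, and the event vocabulary is an assumption.

```python
def purpose_monitor(trace):
    """Check a trace of (event, data_item, purpose) triples against the
    rule: every 'use' of an item must match the purpose it was collected
    for. Returns True iff the trace complies."""
    collected_for = {}  # data item -> purpose declared at collection time
    for event, item, purpose in trace:
        if event == "collect":
            collected_for[item] = purpose
        elif event == "use":
            # Using data for an undeclared purpose (or data never
            # collected at all) violates the policy.
            if collected_for.get(item) != purpose:
                return False
    return True
```

In temporal-logic terms this is roughly "always, use(x, p) implies that collect(x, p) happened in the past"; the framework's contribution is giving such formulas a precise semantics and generating the monitor automatically rather than by hand.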
CamFlow: Managed Data-sharing for Cloud Services
A model of cloud services is emerging whereby a few trusted providers manage
the underlying hardware and communications whereas many companies build on this
infrastructure to offer higher level, cloud-hosted PaaS services and/or SaaS
applications. From the start, strong isolation between cloud tenants was seen
to be of paramount importance, provided first by virtual machines (VM) and
later by containers, which share the operating system (OS) kernel. Increasingly
it is the case that applications also require facilities to effect isolation
and protection of data managed by those applications. They also require
flexible data sharing with other applications, often across the traditional
cloud-isolation boundaries; for example, when government provides many related
services for its citizens on a common platform. Similar considerations apply to
the end-users of applications. But in particular, the incorporation of cloud
services within `Internet of Things' architectures is driving the requirements
for both protection and cross-application data sharing.
These concerns relate to the management of data. Traditional access control is application- and principal/role-specific, applied at policy enforcement points, after which there is no subsequent control over where data flows, a crucial issue once data has left its owner's control and is handled by cloud-hosted applications and within cloud services. Information Flow Control (IFC), in addition, offers system-wide, end-to-end flow control based on the properties of the data. We discuss the potential of cloud-deployed IFC for enforcing owners' dataflow policy with regard to protection and sharing, as well as safeguarding against malicious or buggy software. In addition, the audit log associated with IFC provides transparency, giving configurable system-wide visibility over data flows. [...]

Comment: 14 pages, 8 figures
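The contrast drawn above between one-shot access control and continuous flow control can be illustrated with a standard tag-based IFC check: every entity carries secrecy tags, and a flow is permitted only if the destination's tags include all of the source's, so constraints follow the data rather than stopping at an enforcement point. This is a textbook sketch of the general IFC model, not CamFlow's actual kernel-level implementation or API, and all names are assumptions.

```python
def can_flow(source_tags, dest_tags):
    # Standard secrecy rule: data may only flow to an entity at least as
    # restricted as its source (tags can only grow along a flow).
    return source_tags <= dest_tags

def send(source, dest, audit_log):
    """Attempt a data flow between two tagged entities, logging the
    decision either way -- the log is what provides the system-wide
    visibility over data flows mentioned above."""
    ok = can_flow(source["secrecy"], dest["secrecy"])
    audit_log.append((source["name"], dest["name"], ok))
    if not ok:
        raise PermissionError(
            f"flow {source['name']} -> {dest['name']} denied")
```

Unlike a policy-enforcement-point check, the same rule is applied at every subsequent hop, so data tagged "medical" cannot later leak into a less-restricted component even after leaving its original application.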
PLACES'10: The 3rd Workshop on Programming Language Approaches to Concurrency and Communication-Centric Software
Paphos, Cyprus, March 2010
A unified approach for static and runtime verification: framework and applications
Static verification of software is becoming ever more effective and efficient. Still, static techniques either have high precision, in which case powerful judgements are hard to achieve automatically, or they use abstractions supporting increased automation, possibly losing important aspects of the concrete system in the process. Runtime verification has complementary strengths and weaknesses: it combines full precision of the model (including the real deployment environment) with full automation, but cannot judge future and alternative runs. Another drawback of runtime verification can be the computational overhead of monitoring the running system which, although typically not very high, can still be prohibitive in certain settings. In this paper we propose a framework to combine static analysis techniques and runtime verification with the aim of getting the best of both. In particular, we discuss an instantiation of our framework for the deductive theorem prover KeY and the runtime verification tool Larva. Apart from combining static and dynamic verification, this approach also combines the data-centric analysis of KeY with the control-centric analysis of Larva. An advantage of the approach is that, through the use of a single specification usable by both analysis techniques, expensive parts of the analysis can be moved to the static phase, allowing the runtime monitor to make significant assumptions and to drop parts of expensive checks at runtime. We also discuss specific applications of our approach.
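The core benefit claimed above, that checks already discharged statically can be dropped from the runtime monitor, can be sketched as follows. The sketch assumes a specification given as a set of named checks; `statically_proved` stands in for results obtained from a prover such as KeY, and all names are illustrative rather than taken from the actual KeY/Larva tool chain.

```python
def build_monitor(spec_checks, statically_proved):
    """Build a runtime monitor from a single specification, keeping only
    the checks that the static phase could NOT discharge.

    spec_checks: dict of name -> predicate over the program state
    statically_proved: set of check names already proved statically
    """
    residual = {name: check for name, check in spec_checks.items()
                if name not in statically_proved}

    def monitor(state):
        # Only the residual checks incur runtime overhead; the monitor
        # safely *assumes* the statically proved ones.
        return all(check(state) for check in residual.values())

    return monitor
```

The single specification is the key design point: because both phases consume the same set of checks, moving a check from the dynamic to the static side is just set subtraction, with no re-specification.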