Aspects of Assembly and Cascaded Aspects of Assembly: Logical and Temporal Properties
Highly dynamic computing environments, like ubiquitous and pervasive
computing environments, require frequent adaptation of applications. This has
to be done in a timely fashion: the adaptation process must be as fast as
possible and kept under control. Moreover, the adaptation process has to
ensure a consistent result on completion, even though the adaptations to be
implemented cannot be anticipated at design time. In this paper we present our mechanism for
self-adaptation based on the aspect-oriented programming paradigm, called Aspects
of Assembly (AAs). Using AAs: (1) the adaptation process is fast and its
duration is bounded; (2) adaptation entities are independent of each other
thanks to the weaver's logical merging mechanism; and (3) the high variability of
the software infrastructure can be managed using a mono- or multi-cycle weaving
approach.
Comment: 14 pages, published in International Journal of Computer Science,
Volume 8, issue 4, Jul 2011, ISSN 1694-081
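The logical merging idea can be illustrated with a toy sketch (a deliberate simplification, not the paper's weaver): several independently written aspects advise the same application context; the weaver evaluates each aspect's activation guard and chains the resulting adaptations, so aspects never need to know about each other. The aspect names and context fields below are invented for illustration.

```python
# Toy "logical merging" of independent adaptation aspects (hypothetical).

class Aspect:
    def __init__(self, name, guard, action):
        self.name, self.guard, self.action = name, guard, action

def weave(aspects):
    """Merge independent aspects into one adapted behaviour."""
    def adapted(ctx):
        applied = []
        for a in aspects:
            if a.guard(ctx):          # activation rules are checked independently
                ctx = a.action(ctx)   # adaptations are chained by the weaver
                applied.append(a.name)
        return ctx, applied
    return adapted

low_battery = Aspect("low_battery", lambda c: c["battery"] < 20,
                     lambda c: {**c, "screen": "dim"})
roaming = Aspect("roaming", lambda c: c["roaming"],
                 lambda c: {**c, "sync": "off"})

behaviour = weave([low_battery, roaming])
ctx, applied = behaviour({"battery": 10, "roaming": True})
```

Because each aspect only reads and extends the shared context, adding or removing an aspect does not require changing the others, which is the independence property the abstract claims.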
Cellular-Broadcast Service Convergence through Caching for CoMP Cloud RANs
Cellular and Broadcast services have been traditionally treated independently
due to the different market requirements, thus resulting in different business
models and orthogonal frequency allocations. However, with the advent of cheap
memory and smart caching, this traditional paradigm can converge into a single
system which can provide both services in an efficient manner. This paper
focuses on multimedia delivery through an integrated network, including both a
cellular (also known as unicast or broadband) and a broadcast last mile
operating over shared spectrum. The subscribers of the network are equipped
with a cache which can effectively create zero perceived latency for multimedia
delivery, assuming that the content has been proactively and intelligently
cached. The main objective of this work is to establish analytically the
optimal content popularity threshold, based on an intuitive cost function. In
other words, the aim is to derive which content should be broadcasted and which
content should be unicasted. To facilitate this, Cooperative Multi-Point
(CoMP) joint processing algorithms are employed for the unicast and broadcast PHY
transmissions. To practically implement this, the integrated network controller
is assumed to have access to traffic statistics in terms of content popularity.
Simulation results are provided to assess the gain in terms of total spectral
efficiency. A conventional system, where the two networks operate
independently, is used as a benchmark.
Comment: Submitted to IEEE PIMRC 201
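The threshold idea can be sketched with a hypothetical cost model (the constants and the Zipf-like demand below are assumptions, not values from the paper): broadcasting an item costs a fixed amount of spectrum once, while unicasting costs roughly linearly in the number of requests, so any item whose expected demand exceeds the break-even point should be broadcast.

```python
# Hypothetical popularity-threshold sketch for broadcast/unicast splitting.

def popularity_threshold(broadcast_cost: float, unicast_cost: float) -> float:
    """Demand level at which one broadcast becomes cheaper than unicasts."""
    return broadcast_cost / unicast_cost

def split_catalogue(demands, broadcast_cost=100.0, unicast_cost=1.0):
    """Partition content items by expected number of requests."""
    threshold = popularity_threshold(broadcast_cost, unicast_cost)
    broadcast = [i for i, d in enumerate(demands) if d > threshold]
    unicast = [i for i, d in enumerate(demands) if d <= threshold]
    return broadcast, unicast

# Example: Zipf-like popularity over a small catalogue (item 0 most popular).
demands = [1000 / (rank + 1) for rank in range(10)]
bc, uc = split_catalogue(demands)
```

In this toy model the threshold is simply the cost ratio; the paper derives it analytically from a cost function and measured content-popularity statistics.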
Visual analysis of sensor logs in smart spaces: Activities vs. situations
Models of human habits in smart spaces can be expressed using a multitude of representations, whose readability influences the possibility of their being validated by human experts. Our research is focused on developing a visual analysis pipeline (service) that, starting from the sensor log of a smart space, allows human habits to be visualized graphically. The basic assumption is to apply techniques borrowed from the area of business process automation and mining to a version of the sensor log preprocessed to translate raw sensor measurements into human actions. The proposed pipeline is employed to automatically extract models to be reused for ambient intelligence. In this paper, we present a user evaluation aimed at demonstrating the effectiveness of the approach, by comparing it with a relevant state-of-the-art visual tool, namely SITUVIS.
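The preprocessing step the pipeline relies on can be sketched as follows (a minimal illustration; the sensor names and the sensor-to-action mapping are invented, not taken from the paper): raw readings are translated into human actions before process-mining tools visualize them.

```python
# Hypothetical translation of raw sensor readings into human actions.

SENSOR_TO_ACTION = {
    ("kitchen_motion", "on"): "enter_kitchen",
    ("fridge_door", "open"): "open_fridge",
    ("stove", "on"): "start_cooking",
}

def to_action_log(sensor_log):
    """Keep only the readings that map to a known human action."""
    actions = []
    for timestamp, sensor, value in sensor_log:
        action = SENSOR_TO_ACTION.get((sensor, value))
        if action:
            actions.append((timestamp, action))
    return actions

raw = [(1, "kitchen_motion", "on"), (2, "hallway_motion", "on"),
       (3, "fridge_door", "open")]
log = to_action_log(raw)
```

The resulting action log is the kind of event stream that standard process-mining tools expect as input.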
Time Distortion Anonymization for the Publication of Mobility Data with High Utility
An increasing amount of mobility data is being collected every day by
different means, such as mobile applications or crowd-sensing campaigns. This
data is sometimes published after the application of simple anonymization
techniques (e.g., putting an identifier instead of the users' names), which
might lead to severe threats to the privacy of the participating users.
Literature contains more sophisticated anonymization techniques, often based on
adding noise to the spatial data. However, these techniques either compromise
the privacy if the added noise is too little or the utility of the data if the
added noise is too strong. We investigate in this paper an alternative
solution, which builds on time distortion instead of spatial distortion.
Specifically, our contribution lies in (1) the introduction of the concept of
time distortion to anonymize mobility datasets; (2) Promesse, a protection
mechanism implementing this concept; and (3) a practical study of Promesse compared
to two representative spatial distortion mechanisms, namely Wait For Me, which
enforces k-anonymity, and Geo-Indistinguishability, which enforces differential
privacy. We evaluate our mechanism practically using three real-life datasets.
Our results show that time distortion reduces the number of points of interest
that can be retrieved by an adversary to under 3%, while the introduced
spatial error is almost null and the distortion introduced on the results of
range queries is kept under 13% on average.
Comment: in 14th IEEE International Conference on Trust, Security and Privacy
in Computing and Communications, Aug 2015, Helsinki, Finland
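The core time-distortion idea can be sketched in a few lines (a simplified illustration, not the actual Promesse implementation): keep every GPS point exactly where it is, but reassign timestamps so the user appears to move at constant speed. Long stops, from which an adversary infers points of interest, disappear, while spatial queries see almost no error.

```python
# Sketch of time distortion: constant-speed re-timestamping of a trace.
import math

def constant_speed_timestamps(points):
    """points: list of (x, y, t). Return new timestamps that preserve the
    trace's total duration but spread it uniformly over travelled distance."""
    dists = [0.0]
    for (x0, y0, _), (x1, y1, _) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total_dist = dists[-1]
    t_start, t_end = points[0][2], points[-1][2]
    duration = t_end - t_start
    if total_dist == 0:
        return [t_start] * len(points)
    return [t_start + duration * d / total_dist for d in dists]

# A 30-minute stop at (1, 0) is smoothed away: the new timestamps depend
# only on distance along the path, not on how long the user lingered.
trace = [(0, 0, 0), (1, 0, 600), (1, 0, 2400), (2, 0, 3000)]
new_ts = constant_speed_timestamps(trace)
```

Note how the two samples at (1, 0), originally 1800 seconds apart, collapse onto the same timestamp: the stop that would have revealed a point of interest is no longer visible in the timing.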
Scheduling of data-intensive workloads in a brokered virtualized environment
Providing performance predictability guarantees is increasingly important in cloud platforms, especially for data-intensive applications, for which performance depends greatly on the available rates of data transfer between the various computing/storage hosts underlying the virtualized resources assigned to the application. With the increased prevalence of brokerage services in cloud platforms, there is a need for resource management solutions that consider the brokered nature of these workloads, as well as the special demands of their intra-dependent components. In this paper, we present an offline mechanism for scheduling batches of brokered data-intensive workloads, which can be extended to an online setting. The objective of the mechanism is to decide on a packing of the workloads in a batch that minimizes the broker's incurred costs. Moreover, considering the brokered nature of such workloads, we define a payment model that provides incentives for these workloads to be scheduled as part of a batch, which we analyze theoretically. Finally, we evaluate the proposed scheduling algorithm and exemplify the fairness of the payment model in practical settings via trace-based experiments.
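As a rough intuition for the packing objective (an assumption for illustration, not the paper's mechanism): if the broker pays per provisioned host, packing a batch of workloads into as few hosts as possible is a simple proxy for minimizing its incurred cost. A classic first-fit-decreasing heuristic sketches this:

```python
# First-fit-decreasing packing of a batch of workload demands (illustrative).

def pack_batch(demands, host_capacity):
    """Assign each workload's demand to a host, opening hosts lazily."""
    hosts = []  # remaining capacity per open host
    assignment = {}
    for wid, d in sorted(enumerate(demands), key=lambda x: -x[1]):
        for h, free in enumerate(hosts):
            if free >= d:
                hosts[h] -= d
                assignment[wid] = h
                break
        else:  # no open host fits: provision a new one
            hosts.append(host_capacity - d)
            assignment[wid] = len(hosts) - 1
    return assignment, len(hosts)

demands = [5, 7, 3, 2, 6]
assignment, n_hosts = pack_batch(demands, host_capacity=10)
```

The paper's mechanism additionally accounts for data-transfer rates between intra-dependent components and couples the packing with a payment model; this sketch captures only the cost-minimizing packing intuition.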
IVOA Recommendation: Simple Spectral Access Protocol Version 1.1
The Simple Spectral Access (SSA) Protocol (SSAP) defines a uniform interface
to remotely discover and access one dimensional spectra. SSA is a member of an
integrated family of data access interfaces altogether comprising the Data
Access Layer (DAL) of the IVOA. SSA is based on a more general data model
capable of describing most tabular spectrophotometric data, including time
series and spectral energy distributions (SEDs) as well as 1-D spectra; however,
the scope of the SSA interface as specified in this document is limited to
simple 1-D spectra, including simple aggregations of 1-D spectra. The form of
the SSA interface is simple: clients first query the global resource registry
to find services of interest and then issue a data discovery query to selected
services to determine what relevant data is available from each service; the
candidate datasets available are described uniformly in a VOTable format
document which is returned in response to the query. Finally, the client may
retrieve selected datasets for analysis. Spectrum datasets returned by an SSA
spectrum service may be either precomputed, archival datasets, or they may be
virtual data which is computed on the fly to respond to a client request.
Spectrum datasets may conform to a standard data model defined by SSA, or may
be native spectra with custom project-defined content. Spectra may be returned
in any of a number of standard data formats. Spectral data is generally stored
externally to the VO in a format specific to each spectral data collection;
currently there is no standard way to represent astronomical spectra, and
virtually every project does it differently. Hence spectra may be actively
mediated to the standard SSA-defined data model at access time by the service,
so that client analysis programs do not have to be familiar with the
idiosyncratic details of each data collection to be accessed.
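The data discovery step is a plain HTTP GET; `REQUEST=queryData`, `POS` (decimal degrees) and `SIZE` (degrees) are standard SSAP parameters. The service base URL below is a hypothetical placeholder:

```python
# Build an SSAP queryData URL (discovery step); the service answers with a
# VOTable describing each candidate dataset and an access URL to retrieve it.
from urllib.parse import urlencode

def ssa_query_url(base_url: str, ra: float, dec: float, size_deg: float,
                  fmt: str = "application/fits") -> str:
    params = {
        "REQUEST": "queryData",
        "POS": f"{ra},{dec}",     # search position, decimal degrees
        "SIZE": str(size_deg),    # search diameter, degrees
        "FORMAT": fmt,            # desired dataset format
    }
    return base_url + "?" + urlencode(params)

url = ssa_query_url("http://example.org/ssa", 180.0, -30.0, 0.1)
```

The client then parses the returned VOTable and fetches selected datasets via the access URLs it lists, which is the retrieval step the abstract describes.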
Spaceprint: a Mobility-based Fingerprinting Scheme for Public Spaces
In this paper, we address the problem of how automated situation-awareness
can be achieved by learning real-world situations from ubiquitously generated
mobility data. Without semantic input about the time and space where situations
take place, this turns out to be a fundamentally challenging problem.
Uncertainties also introduce technical challenges when data is generated at
irregular time intervals and is mixed with noise and errors. Relying purely on
temporal patterns observable in mobility data, we propose
Spaceprint, a fully automated algorithm for finding the repetitive pattern of
similar situations in spaces. We evaluate this technique by showing how the
latent variables describing the category and the actual identity of a space
can be discovered from the extracted situation patterns. To do so, we use
different real-world mobility datasets with data about the presence of mobile
entities in a variety of spaces. We also evaluate the performance of this
technique by showing its robustness against uncertainties.
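One simple way to find a repetitive pattern purely from temporal presence data (an illustrative stand-in, not the Spaceprint algorithm itself) is to take the occupancy count of a space over time and pick the lag that maximizes its autocorrelation:

```python
# Detect the dominant repetition period of an occupancy series (illustrative).

def dominant_period(series, max_lag=None):
    """Return the lag with the highest autocorrelation (excluding lag 0)."""
    n = len(series)
    mean = sum(series) / n
    centred = [v - mean for v in series]
    max_lag = max_lag or n // 2
    def autocorr(lag):
        return sum(centred[i] * centred[i + lag] for i in range(n - lag))
    return max(range(1, max_lag + 1), key=autocorr)

# Synthetic occupancy: busy 09:00-18:00 every day, sampled hourly for 2 weeks.
occupancy = [10 if (h % 24) in range(9, 18) else 1 for h in range(24 * 14)]
period = dominant_period(occupancy)
```

A space with a strong daily rhythm yields a 24-sample period; the category of a space (office, shop, home) would then be reflected in the shape of the pattern within one period.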