75,207 research outputs found
Data generator for evaluating ETL process quality
Obtaining the right set of data for evaluating the fulfillment of different quality factors in the extract-transform-load (ETL) process design is rather challenging. First, the real data might be out of reach due to privacy constraints, while manually providing a synthetic set of data is a notoriously labor-intensive task that needs to take various combinations of process parameters into account. More importantly, a single dataset usually does not represent the evolution of data throughout the complete process lifespan, hence missing a plethora of possible test cases. To facilitate such a demanding task, in this paper we propose an automatic data generator (i.e., Bijoux). Starting from a given ETL process model, Bijoux extracts the semantics of data transformations, analyzes the constraints they imply over input data, and automatically generates testing datasets. Bijoux is highly modular and configurable, enabling end-users to generate datasets for a variety of interesting test scenarios (e.g., evaluating specific parts of an input ETL process design, with different input dataset sizes, distributions of data, and operation selectivities). We have developed a running prototype that implements the functionality of our data generation framework, and here we report experimental findings showing the effectiveness and scalability of our approach.
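The core idea of constraint-driven test-data generation can be illustrated with a hedged sketch (this is not Bijoux's implementation; the operation model, field names, and helper below are assumptions): given the predicate of a filter operation extracted from an ETL model, emit rows that satisfy or violate it, so both branches of the operation are exercised.

```python
import random

def generate_rows(n, predicate, satisfy=True, lo=0, hi=100, seed=42):
    """Generate n synthetic rows whose 'amount' field satisfies (or
    violates) a filter predicate, via simple rejection sampling."""
    rng = random.Random(seed)
    rows = []
    while len(rows) < n:
        row = {"amount": rng.randint(lo, hi)}
        if predicate(row) == satisfy:
            rows.append(row)
    return rows

# Hypothetical filter operation taken from an ETL model: amount > 50.
pred = lambda r: r["amount"] > 50
passing = generate_rows(5, pred, satisfy=True)    # rows the filter keeps
failing = generate_rows(5, pred, satisfy=False)   # rows the filter drops
```

Varying `n`, the value range, and the seeded distribution corresponds to the configurable dataset sizes, data distributions, and operation selectivities mentioned above.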
A Novel Framework for Online Amnesic Trajectory Compression in Resource-constrained Environments
State-of-the-art trajectory compression methods usually involve high
space-time complexity or yield unsatisfactory compression rates, leading to
rapid exhaustion of memory, computation, storage and energy resources. Their
ability is commonly limited when operating in a resource-constrained
environment especially when the data volume (even when compressed) far exceeds
the storage limit. Hence we propose a novel online framework for error-bounded
trajectory compression and ageing called the Amnesic Bounded Quadrant System
(ABQS), whose core is the Bounded Quadrant System (BQS) algorithm family that
includes a normal version (BQS), Fast version (FBQS), and a Progressive version
(PBQS). ABQS intelligently manages a given storage and compresses the
trajectories with different error tolerances subject to their ages. In the
experiments, we conduct comprehensive evaluations for the BQS algorithm family
and the ABQS framework. Using empirical GPS traces from flying foxes and cars,
and synthetic data from simulation, we demonstrate the effectiveness of the
standalone BQS algorithms in significantly reducing the time and space
complexity of trajectory compression, while improving on the compression
rates of state-of-the-art algorithms by up to 45%. We also show that the
operational time of the target resource-constrained hardware platform can be
prolonged by up to 41%. We then verify that, given data volumes far greater
than the available storage space, ABQS achieves 15 to 400 times smaller
errors than the baselines. We also show that the algorithm is robust to
extreme trajectory shapes.
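The abstract does not specify the BQS algorithms themselves; as a hedged illustration of what error-bounded online trajectory compression means, the sketch below uses a generic opening-window scheme (the deviation metric and all names are assumptions, not the paper's method): a point is kept only when the buffered points can no longer all be approximated, within an error bound `eps`, by a straight segment from the last kept point.

```python
import math

def point_seg_dist(p, a, b):
    """Perpendicular distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def compress(points, eps):
    """Opening-window compression: extend the current window while every
    buffered point stays within eps of the anchor-to-current segment;
    otherwise keep the last fitting point and restart the window there."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    anchor, window = points[0], []
    for p in points[1:]:
        if all(point_seg_dist(q, anchor, p) <= eps for q in window):
            window.append(p)
        else:
            last = window[-1]
            kept.append(last)
            anchor, window = last, [p]
    kept.append(points[-1])
    return kept
```

A straight GPS trace collapses to its two endpoints, while sharp turns are preserved; ageing, in the ABQS spirit, would amount to re-running such a pass on older trajectories with a larger `eps`.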
Cache policies for cloud-based systems: To keep or not to keep
In this paper, we study cache policies for cloud-based caching. Cloud-based
caching uses cloud storage services such as Amazon S3 as a cache for data items
that would have been recomputed otherwise. Cloud-based caching departs from
classical caching: cloud resources are potentially infinite and paid only
when used, while classical caching relies on a fixed storage capacity whose
main monetary cost is the initial investment. To deal with this new context,
we design and evaluate a new caching policy that minimizes the overall cost of
a cloud-based system. The policy takes into account the frequency of
consumption of an item and the cloud cost model. We show that this policy is
easier to operate, that it scales with the demand and that it outperforms
classical policies managing a fixed capacity.
Comment: Proceedings of IEEE International Conference on Cloud Computing 2014 (CLOUD 14)
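The keep-or-discard trade-off the abstract describes can be sketched as follows (a hedged illustration, not the paper's policy: the pricing figure, Poisson-style access assumption, and all names are invented for the example): an item is worth keeping only while the storage cost expected to accrue before its next access stays below the cost of recomputing it.

```python
def should_keep(size_gb, storage_cost_per_gb_month, access_rate_per_month,
                recompute_cost):
    """Keep a cached item iff the expected storage cost paid until its
    next access is lower than the cost of recomputing it on demand.

    With accesses arriving at `access_rate_per_month` (memoryless
    assumption), the expected wait for the next access is 1/rate months.
    """
    if access_rate_per_month <= 0:
        return False  # never accessed again: always cheaper to discard
    expected_wait_months = 1.0 / access_rate_per_month
    expected_storage_cost = (size_gb * storage_cost_per_gb_month
                             * expected_wait_months)
    return expected_storage_cost < recompute_cost

# Illustrative numbers only (0.023 $/GB-month is a made-up storage price):
# hot item, ~10 accesses/month -> expected storage ~= $0.0023 < $0.50.
keep_hot = should_keep(1.0, 0.023, 10.0, 0.50)
# cold item, ~0.01 accesses/month -> expected storage ~= $2.30 > $0.50.
keep_cold = should_keep(1.0, 0.023, 0.01, 0.50)
```

Because the rule depends only on per-item statistics and the cost model, it needs no fixed capacity to manage, which is what makes it scale with demand.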
Timely Data Delivery in a Realistic Bus Network
Abstract—WiFi-enabled buses and stops may form the backbone of a metropolitan delay tolerant network that exploits nearby communications, temporary storage at stops, and predictable bus mobility to deliver non-real-time information. This paper studies the problem of how to route data from its source to its destination in order to maximize the delivery probability by a given deadline. We assume the bus schedule is known, but we take into account that randomness, due to road traffic conditions or passengers boarding and alighting, affects bus mobility. We propose a simple stochastic model for bus arrivals at stops, supported by a study of real-life traces collected in a large urban network. A succinct graph representation of this model allows us to devise an optimal (under our model) single-copy routing algorithm and then extend it to cases where several copies of the same data are permitted. Through an extensive simulation study, we compare the optimal routing algorithm with three other approaches: minimizing the expected traversal time over our graph, minimizing the number of hops a packet can travel, and a recently proposed heuristic based on bus frequencies. Our optimal algorithm outperforms all of them, but most of the time it essentially reduces to minimizing the expected traversal time. For deadlines close to the expected delivery time, the multi-copy extension requires only 10 copies to approach the performance of the costly flooding approach.
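The single-copy objective can be sketched as a dynamic program over the remaining time budget (a hedged illustration, not the paper's algorithm: the toy network, travel-time distributions, and names below are invented, and leg times are assumed independent): at each stop, choose the outgoing leg that maximizes the probability of reaching the destination within the budget left.

```python
from functools import lru_cache

# Toy bus network: edges[u] = list of (v, [(travel_time, prob), ...]).
# The discrete travel-time distributions stand in for a stochastic
# bus-arrival model; all values are illustrative only.
edges = {
    "A": [("B", [(2, 0.8), (5, 0.2)]), ("C", [(1, 0.5), (2, 0.5)])],
    "B": [("D", [(1, 0.9), (4, 0.1)])],
    "C": [("D", [(3, 0.6), (6, 0.4)])],
    "D": [],
}

def max_delivery_prob(src, dst, deadline):
    """Maximize P(reach dst within deadline) with a single copy,
    assuming independent leg travel times."""
    @lru_cache(maxsize=None)
    def p(node, budget):
        if node == dst:
            return 1.0
        best = 0.0
        for v, dist in edges[node]:
            best = max(best, sum(pr * p(v, budget - t)
                                 for t, pr in dist if t <= budget))
        return best
    return p(src, deadline)
```

Note how the deadline-aware optimum can differ from minimizing expected traversal time: a leg with a slightly larger mean but a thinner tail can win when the budget is tight, which matches the observation above that the two objectives coincide only most of the time.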