Multi-Layer Monitoring at the Edge for Vehicular Video Streaming: Field Trials
In an increasingly connected world, the monitoring and characterization of
wireless networks are of vital importance. Service and application providers
need to have a detailed understanding of network performance to offer new
solutions tailored to the needs of today's society. In the context of mobility,
in-vehicle infotainment services are expected to stand out among other popular
connected vehicle services, so it is essential that communication networks are
able to satisfy the Quality of Service (QoS) and Quality of Experience (QoE)
requirements needed for these types of services. This paper investigates a
multi-layer network performance monitoring architecture at the edge providing
QoS, QoE, and localization information for vehicular video streaming
applications in real time over 5G networks. To conduct field trials and
present test results, a Mobile Network Operator's (MNO) 5G Standalone (SA)
network and Multi-access Edge Computing (MEC) infrastructure are used to
provide connectivity and edge computing resources to a vehicle equipped with a
5G modem.
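A common application-layer QoE indicator for video streaming is the stall ratio: the fraction of session time the player spends rebuffering. As a hedged illustration of the kind of metric such an edge monitoring architecture could report (the metric choice and the event format are our assumptions, not the paper's), a minimal sketch:

```python
# Toy QoE indicator: stall ratio computed from a timeline of player state
# changes. The event format (timestamp_seconds, state) is assumed here.

def stall_ratio(events):
    """Fraction of session time the player spent stalled (rebuffering)."""
    stalled = total = 0.0
    for (t0, state), (t1, _) in zip(events, events[1:]):
        span = t1 - t0
        total += span
        if state == "stalled":
            stalled += span
    return stalled / total if total else 0.0

session = [(0.0, "playing"), (40.0, "stalled"), (42.5, "playing"), (100.0, "end")]
print(stall_ratio(session))  # 0.025: 2.5 s stalled out of a 100 s session
```

In a multi-layer setup, such application-layer indicators would be correlated with network-layer QoS measurements (throughput, latency) collected at the edge.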
Evaluation of Hadoop/Mapreduce Framework Migration Tools
In distributed systems, database migration is not an easy task. Companies encounter challenges when moving data, including legacy data, to a big data platform. This paper reviews tools for migrating from traditional databases to the big data platform and, based on the review, suggests a migration model.
Identity‐based Schemes for a Secured Big Data and Cloud ICT Framework in Smart Grid System
A smart grid is an intelligent cyber-physical system (CPS) that generates a massive amount of data for efficient grid operation. In this paper, a big-data-driven, cloud-based information and communication technology (ICT) framework for the smart grid CPS is proposed. The framework deploys hybrid cloud servers to enhance the scalability and reliability of the smart grid communication infrastructure. Because the data in the ICT framework contain sensitive customer information and data critical for automated control, the security of data transmission must be ensured. To secure communications over the Internet, identity-based schemes are proposed, chosen especially for their advantage in key management. Specifically, an identity-based signcryption (IBSC) scheme is proposed to provide confidentiality, non-repudiation, and data integrity. For practical purposes, an identity-based signature scheme is relaxed from the proposed IBSC to provide non-repudiation only. Moreover, identity-based schemes are also proposed to achieve signature delegation within the ICT framework. The security of the proposed IBSC scheme is rigorously analyzed, and its efficiency is demonstrated with an implementation using the modified Weil pairing over an elliptic curve.
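The defining feature of identity-based cryptography is that a trusted key-generation centre (PKG) derives each party's key from its identity string, so no certificate directory is needed. The sketch below illustrates only that workflow with a symmetric HMAC toy: unlike the paper's pairing-based IBSC, it provides no non-repudiation (the verifier needs the PKG), and all identities, message formats, and key-derivation choices are our assumptions for illustration.

```python
import hashlib
import hmac
import os

MASTER_SECRET = os.urandom(32)  # held only by the trusted key-generation centre


def extract_key(identity: str) -> bytes:
    # Identity-based: a party's key is derived from its identity string.
    # A real IBSC scheme would use the modified Weil pairing instead of HMAC.
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()


def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Expand a hash into a keystream for XOR encryption (toy construction).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]


def signcrypt(sender_id: str, receiver_id: str, msg: bytes):
    # One pass gives confidentiality (XOR cipher) and integrity (MAC tag).
    nonce = os.urandom(16)
    ks = _keystream(extract_key(receiver_id), nonce, len(msg))
    ct = bytes(a ^ b for a, b in zip(msg, ks))
    tag = hmac.new(extract_key(sender_id), nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag


def unsigncrypt(sender_id: str, receiver_id: str, nonce, ct, tag):
    want = hmac.new(extract_key(sender_id), nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, want):
        raise ValueError("authentication failed")
    ks = _keystream(extract_key(receiver_id), nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))


nonce, ct, tag = signcrypt("meter-17@grid", "control-centre@grid", b"voltage=231.8V")
print(unsigncrypt("meter-17@grid", "control-centre@grid", nonce, ct, tag))
```

A tampered ciphertext or tag fails MAC verification before any decryption, mirroring the integrity guarantee the paper's IBSC scheme provides.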
git2net - Mining Time-Stamped Co-Editing Networks from Large git Repositories
Data from software repositories have become an important foundation for the
empirical study of software engineering processes. A recurring theme in the
repository mining literature is the inference of developer networks capturing
e.g. collaboration, coordination, or communication from the commit history of
projects. Most of the studied networks are based on the co-authorship of
software artefacts defined at the level of files, modules, or packages. While
this approach has led to insights into the social aspects of software
development, it neglects detailed information on code changes and code
ownership, e.g. which exact lines of code have been authored by which
developers, that is contained in the commit log of software projects.
Addressing this issue, we introduce git2net, a scalable Python tool that
facilitates the extraction of fine-grained co-editing networks in large git
repositories. It uses text mining techniques to analyse the detailed history of
textual modifications within files. This information allows us to construct
directed, weighted, and time-stamped networks, where a link signifies that one
developer has edited a block of source code originally written by another
developer. Our tool is applied in case studies of an Open Source and a
commercial software project. We argue that it opens up a massive new source of
high-resolution data on human collaboration patterns. (Comment: MSR 2019, 12 pages, 10 figures)
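The network construction the abstract describes can be sketched in a few lines: from per-block edit records (as text mining of the commit history might yield them), build a directed, weighted, time-stamped network where a link points from the editor to the original author of the edited code. The record format and names below are our assumptions, not git2net's actual API.

```python
from collections import defaultdict
from datetime import datetime

# One record per modified code block:
# (editor, original_author, lines_touched, commit_time). Hypothetical data.
edit_records = [
    ("alice", "bob",   12, datetime(2019, 1, 3)),
    ("alice", "bob",    3, datetime(2019, 2, 1)),
    ("carol", "alice",  7, datetime(2019, 2, 9)),
    ("bob",   "bob",   20, datetime(2019, 2, 10)),  # self-edit: no link
]

# Directed, weighted, time-stamped network:
# link (editor -> original_author) keeps every edit's weight and timestamp.
links = defaultdict(list)
for editor, author, n_lines, ts in edit_records:
    if editor != author:  # co-editing requires two distinct developers
        links[(editor, author)].append((n_lines, ts))

# Aggregate per-link weights, e.g. total lines of the other developer edited.
weights = {pair: sum(w for w, _ in edits) for pair, edits in links.items()}
print(weights)  # {('alice', 'bob'): 15, ('carol', 'alice'): 7}
```

Keeping the per-edit timestamps (rather than only aggregate weights) is what allows the temporal analyses of collaboration patterns the paper argues for.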
GraphSE: An Encrypted Graph Database for Privacy-Preserving Social Search
In this paper, we propose GraphSE, an encrypted graph database for online
social network services to address massive data breaches. GraphSE preserves
the functionality of social search, a key enabler for quality social network
services, where social search queries are conducted on a large-scale social
graph and meanwhile perform set and computational operations on user-generated
contents. To enable efficient privacy-preserving social search, GraphSE
provides an encrypted structural data model to facilitate parallel and
encrypted graph data access. It is also designed to decompose complex social
search queries into atomic operations and realise them via interchangeable
protocols in a fast and scalable manner. We build GraphSE with various
queries supported in the Facebook graph search engine and implement a
full-fledged prototype. Extensive evaluations on Azure Cloud demonstrate that
GraphSE is practical for querying a social graph with a million users.
(Comment: this is the full version of our AsiaCCS paper "GraphSE: An
Encrypted Graph Database for Privacy-Preserving Social Search"; it includes
the security proof of the proposed scheme. To cite this work, please cite the
conference version.)
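To make the decomposition idea concrete, here is a plaintext sketch of how a social search query splits into atomic operations; in GraphSE each step would instead run over encrypted structures via the interchangeable protocols the abstract mentions. The data, query, and function names are hypothetical.

```python
# Hypothetical social data: adjacency sets for two edge types.
friends = {"alice": {"bob", "carol", "dave"}}
likes = {"jazz": {"carol", "dave", "erin"}}


def friends_who_like(user: str, topic: str) -> set:
    # The query "friends of <user> who like <topic>" decomposes into three
    # atomic operations: two neighbour-set retrievals and one intersection.
    # GraphSE evaluates each atomic operation under encryption.
    return friends[user] & likes[topic]


print(sorted(friends_who_like("alice", "jazz")))  # ['carol', 'dave']
```

Because each atomic operation is independent, the retrievals can run in parallel across partitions of the graph, which is what makes the scheme scalable.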
Protection of big data privacy
In recent years, big data has become a hot research topic. The increasing volume of big data also increases the risk of breaching the privacy of individuals. Since big data requires high computational power and large storage, distributed systems are used, and as multiple parties are involved in these systems, the risk of privacy violation increases. A number of privacy-preserving mechanisms have been developed for privacy protection at the different stages (e.g., data generation, data storage, and data processing) of the big data life cycle. The goal of this paper is to provide a comprehensive overview of privacy-preservation mechanisms in big data and to present the challenges facing existing mechanisms. In particular, we describe the infrastructure of big data and the state-of-the-art privacy-preserving mechanisms in each stage of the big data life cycle. Furthermore, we discuss challenges and future research directions related to privacy preservation in big data.
Sandboxed, Online Debugging of Production Bugs for SOA Systems
Short time-to-bug localization is extremely important for any 24x7 service-oriented application. To this end, we introduce a new debugging paradigm called live debugging. There are two goals that any live debugging infrastructure must meet: Firstly, it must offer real-time insight for bug diagnosis and localization, which is paramount when errors happen in user-facing applications. Secondly, live debugging should not impact user-facing performance for normal events. In large distributed applications, bugs which impact only a small percentage of users are common. In such scenarios, debugging a small part of the application should not impact the entire system.
With the above-stated goals in mind, this thesis presents a framework called Parikshan, which leverages user-space containers (OpenVZ) to launch application instances for the express purpose of live debugging. Parikshan is driven by a live-cloning process, which generates a replica (called the debug container) of a production service, cloned from a production container that continues to serve real output to the user. The debug container provides a sandbox environment for the safe execution of monitoring and debugging by users, without any perturbation to the production execution environment. As part of this framework, we have designed customized network proxies that replicate inputs from clients to both the production and debug containers, and safely discard all outputs of the debug container. Together, the network duplicator and the debug container ensure both compute and network isolation of the debugging environment. We believe this work provides the first practical real-time debugging of large multi-tier and cloud applications, without requiring any application downtime and with minimal performance impact.
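The duplicator's contract can be captured in a few lines: the client sees only the production response, while the debug replica receives a copy of every input and its output (including crashes) is discarded. This is a function-level sketch of that semantics with hypothetical names; the real Parikshan duplicates network traffic to OpenVZ containers, and does so concurrently rather than sequentially as here.

```python
def duplicate(request, prod_handler, debug_handler):
    """Return the production response; mirror the request to the debug
    replica and discard whatever the replica produces."""
    response = prod_handler(request)  # the client only ever sees this
    try:
        debug_handler(request)        # output is discarded
    except Exception:
        pass  # a crashing debug replica must never affect the client
    return response


seen = []
prod = lambda r: r.upper()
debug = lambda r: (seen.append(r), 1 / 0)  # buggy replica: records input, then crashes
print(duplicate("ping", prod, debug))  # PING
```

Even though the debug handler raises, the client still receives the production response, which is exactly the isolation property the thesis claims for the network duplicator.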