
    Secure and Flexible Global File Sharing

    Sharing of files is a major application of computer networks, with examples ranging from LAN-based network file systems to wide-area applications such as the use of version control systems in distributed software development. Identification, authentication, and access control are much more challenging in this complex, large-scale distributed environment. In this paper, we introduce the Distributed Credential Filesystem (DisCFS). Under DisCFS, credentials are used to identify both the files stored in the file system and the users permitted to access them, as well as the circumstances under which such access is allowed. As with traditional capabilities, users can delegate access rights (and thus share information) simply by issuing new credentials. Credentials allow files to be accessed by remote users who are not known a priori to the server. Our design achieves an elegant separation of policy and mechanism, which is mirrored in the implementation. Our prototype implementation of DisCFS runs under OpenBSD 2.8, using a modified user-level NFS server. Our measurements suggest that flexible and secure file sharing can be made scalable at a surprisingly low performance cost.
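    The capability-style delegation described above can be illustrated with a toy sketch. This is not the DisCFS implementation; the HMAC-based signing, field names, and chain-walking verifier below are all hypothetical simplifications of the idea that a holder re-shares a file by issuing a new, narrower credential whose authority chains back to the server.

```python
import hashlib
import hmac
import json

SERVER_KEY = b"server-root-secret"  # hypothetical trust root held by the file server


def issue(signing_key: bytes, holder: str, file_id: str, rights: frozenset) -> dict:
    """Sign a credential binding a holder to a file and a set of rights."""
    body = {"holder": holder, "file": file_id, "rights": sorted(rights)}
    mac = hmac.new(signing_key, json.dumps(body, sort_keys=True).encode(), hashlib.sha256)
    return {"body": body, "sig": mac.hexdigest()}


def delegate(parent: dict, holder_key: bytes, new_holder: str, rights: frozenset) -> dict:
    """A holder shares access by issuing a new credential that embeds its own.

    Rights can only be narrowed, never widened, relative to the parent.
    """
    assert rights <= set(parent["body"]["rights"])
    child = issue(holder_key, new_holder, parent["body"]["file"], rights)
    child["parent"] = parent
    return child


def check_sig(cred: dict, key: bytes) -> bool:
    """Recompute the MAC over the credential body and compare in constant time."""
    mac = hmac.new(key, json.dumps(cred["body"], sort_keys=True).encode(), hashlib.sha256)
    return hmac.compare_digest(mac.hexdigest(), cred["sig"])


def verify(cred: dict, keys: dict) -> bool:
    """Walk the delegation chain: each link must be correctly signed by the
    parent's holder and must only narrow the parent's rights; the root link
    must be signed by the server."""
    node = cred
    while "parent" in node:
        parent = node["parent"]
        signer_key = keys.get(parent["body"]["holder"])
        if signer_key is None or not check_sig(node, signer_key):
            return False
        if not set(node["body"]["rights"]) <= set(parent["body"]["rights"]):
            return False
        node = parent
    return check_sig(node, SERVER_KEY)
```

    The point of the sketch is the last function: the server never needs to know "bob" in advance; it only needs to validate a chain of signatures rooted in a credential it issued itself.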

    Dynamic Spectrum Sharing for Load Balancing in Multi-Cell Mobile Edge Computing

    Large-scale mobile edge computing (MEC) systems require scalable solutions to allocate communication and computing resources to the users. In this letter, we address this challenge by applying dynamic spectrum sharing among the base stations (BSs), together with local resource allocation in the cells. We show that the network-wide resource allocation can be transformed into a convex optimization problem, and propose a distributed, hierarchical solution with limited information exchange among the BSs. Numerical results demonstrate that the proposed solution is superior to other baseline algorithms in which wireless and computing resource allocation is not jointly optimized, or in which the wireless resources allocated to the BSs are fixed.
    Comment: IEEE WC
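    The letter's actual formulation is not reproduced here, but the flavor of a convex resource split with a closed-form solution can be shown with proportional-fair bandwidth sharing: maximizing sum_i w_i*log(x_i) subject to sum_i x_i = B gives x_i = B*w_i/sum_j w_j. The per-BS weights w_i standing in for cell load are an assumption of this sketch, a deliberate simplification of the paper's joint wireless/computing problem.

```python
def proportional_fair_share(loads, total_bandwidth):
    """Closed-form solution of: max sum_i w_i*log(x_i)  s.t.  sum_i x_i = B.

    Lagrangian stationarity gives w_i/x_i = mu for every i, so each share is
    proportional to its weight: x_i = B * w_i / sum_j w_j.
    """
    total = sum(loads)
    return [total_bandwidth * w / total for w in loads]
```

    For example, three cells with loads 3, 1, and 2 sharing 60 MHz receive 30, 10, and 20 MHz respectively; the split is the unique optimum of this (strictly concave) objective.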

    A Semantic-Based Middleware for Multimedia Collaborative Applications

    The growth of the Internet and the performance increase of desktop computers have enabled large-scale distributed multimedia applications. They are expected to grow in demand and services, and their traffic volume will dominate. Real-time delivery, scalability, and heterogeneity are some of the requirements of these applications that have motivated a revision of the traditional Internet services, operating system structures, and software systems for supporting application development. This work proposes a Java-based lightweight middleware for the development of large-scale multimedia applications. The middleware offers four services for multimedia applications. First, it provides two scalable lightweight protocols for floor control. One follows a centralized model that integrates easily with centralized resources such as a shared tool, and the other is a distributed protocol targeted at distributed resources such as audio. Scalability is achieved by periodically multicasting a heartbeat that conveys state information used by clients to request the resource via temporary TCP connections. Second, it supports intra- and inter-stream synchronization algorithms and policies. We introduce the concept of a virtual observer, which perceives the session as being in the same room as a sender. We avoid the need for globally synchronized clocks by introducing the concept of a user's multimedia presence, which defines a new manner of combining streams coming from multiple sites. It includes a novel algorithm for the estimation and removal of clock skew. In addition, it supports event-driven asynchronous message reception, quality-of-service measures, and traffic rate control. Finally, the middleware provides support for data sharing via a resilient and scalable protocol for the transmission of images that can dynamically change in content and size.
    The effectiveness of the middleware components is shown with the implementation of Odust, a prototypical sharing-tool application built on top of the middleware.
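    A common way to estimate and remove clock skew from paired timestamps is a least-squares fit of receiver time against sender time; the slope's deviation from 1 is the relative skew. This is offered only as an illustration of the kind of algorithm the abstract mentions, not the middleware's actual method.

```python
def estimate_skew(send_ts, recv_ts):
    """Least-squares fit recv = a + b*send.

    (b - 1) is the relative clock skew between sender and receiver;
    a absorbs the clock offset plus the mean network delay.
    """
    n = len(send_ts)
    mx = sum(send_ts) / n
    my = sum(recv_ts) / n
    sxx = sum((x - mx) ** 2 for x in send_ts)
    sxy = sum((x - mx) * (y - my) for x, y in zip(send_ts, recv_ts))
    b = sxy / sxx
    a = my - b * mx
    return a, b


def remove_skew(recv_ts, a, b):
    """Map receiver timestamps back onto the sender's clock."""
    return [(y - a) / b for y in recv_ts]
```

    With timestamps drifting at 1000 ppm, the fitted slope comes out as 1.001 and the corrected timestamps land back on the sender's timeline; real traces would add jitter, which least squares averages out.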

    Large-scale Wireless Local-area Network Measurement and Privacy Analysis

    The edge of the Internet is increasingly becoming wireless. Understanding the wireless edge is therefore important for understanding the performance and security aspects of the Internet experience. This need is especially pressing for enterprise-wide wireless local-area networks (WLANs), as organizations increasingly depend on WLANs for mission-critical tasks. Studying a live production WLAN, especially a large-scale network, is a difficult undertaking. Two fundamental difficulties are (1) building a scalable network measurement infrastructure to collect traces from a large-scale production WLAN, and (2) preserving user privacy while sharing the collected traces with the network research community. In this dissertation, we present our experience in designing and implementing one of the largest distributed WLAN measurement systems in the United States, the Dartmouth Internet Security Testbed (DIST), with a particular focus on our solutions to the challenges of efficiency, scalability, and security. We also present an extensive evaluation of the DIST system. To understand the severity of some potential trace-sharing risks for an enterprise-wide large-scale wireless network, we conduct a privacy analysis on one kind of wireless network trace, a user-association log, collected from a large-scale WLAN. We introduce a machine-learning-based approach that can extract and quantify sensitive information from a user-association log, even after it has been sanitized. Finally, we present a case study that evaluates the tradeoff between utility and privacy in WLAN trace sanitization.

    Scaling up publish/subscribe overlays using interest correlation for link sharing

    Topic-based publish/subscribe is at the core of many distributed systems, ranging from application integration middleware to news dissemination. Much research has therefore been dedicated to publish/subscribe architectures and protocols, and in particular to the design of overlay networks for decentralized topic-based routing and efficient message dissemination. Nonetheless, existing systems fail to take full advantage of shared interests when disseminating information, and hence either suffer from high maintenance and traffic costs or construct overlays that cope poorly with the scale and dynamism of large networks. In this paper we present StaN, a decentralized protocol that optimizes the properties of gossip-based overlay networks for topic-based publish/subscribe by sharing a large number of physical connections without disrupting their logical properties. StaN relies only on local knowledge and operates by leveraging common interests among participants to improve global resource usage and promote topic and event scalability. Experimental evaluation under two real workloads, both via a real deployment and through simulation, shows that StaN provides an attractive infrastructure for scalable topic-based publish/subscribe.

    Scale Difficulty And Incompetent Operation In Unlock Net

    This paper presents a new system architecture for managing RDF partitions at large scale, along with new data placement strategies for locating relevant fragments of semantic data. We describe RpCl, an efficient and scalable distributed RDF data management system for the cloud. Unlike previous approaches, RpCl performs a physiological analysis of both instance and schema information before partitioning the data. The system maintains a sliding window tracking the current state of the workload, together with relevant statistics on the joins to be performed and their associated costs. It combines a pruning step, which summarizes the RDF graph, with a hash-based horizontal partitioning of the triples into a grid-like index structure. POI is a dynamic index in RpCl that uses a lexicographic tree to parse each incoming URI or literal and assign it a unique key value. Partitioning such data with classical techniques, or partitioning the graph with traditional min-cut algorithms, leads to very inefficient distributions and a large number of joins. Many RDF systems rely instead on hash partitioning and on distributed selections, projections, and joins. GridVine was one of the first systems to address large-scale decentralized RDF management. In this paper, we describe the RpCl architecture and its metadata structures, along with the new algorithms we use to partition and distribute data. Our evaluation shows that RpCl is often two orders of magnitude faster than state-of-the-art systems on standard workloads.
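    The hash-partitioning baseline mentioned above can be sketched in a few lines: hashing on the subject keeps every triple of a given subject on one node, so subject-centered (star) joins run locally, while arbitrary graph patterns may still cross nodes. This is a generic illustration of hash partitioning, not RpCl's grid index.

```python
import hashlib


def node_for(subject: str, n_nodes: int) -> int:
    """Stable hash partitioning on the subject: all triples sharing a subject
    land on the same node, so star-shaped joins around that subject need no
    cross-node traffic."""
    h = int(hashlib.sha256(subject.encode()).hexdigest(), 16)
    return h % n_nodes


def partition(triples, n_nodes):
    """Distribute (subject, predicate, object) triples across n_nodes buckets."""
    parts = {i: [] for i in range(n_nodes)}
    for s, p, o in triples:
        parts[node_for(s, n_nodes)].append((s, p, o))
    return parts
```

    The weakness the abstract points at is visible here too: a path query that hops from an object to a new subject must contact another node, which is what smarter, workload-aware placement tries to avoid.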

    RELEASE: A High-level Paradigm for Reliable Large-scale Server Software

    Erlang is a functional language with a much-emulated model for building reliable distributed systems. This paper outlines the RELEASE project and describes the progress made in its first six months. The project aims to scale Erlang's radical concurrency-oriented programming paradigm to build reliable general-purpose software, such as server-based systems, on massively parallel machines. Erlang currently has inherently scalable computation and reliability models, but in practice scalability is constrained by aspects of the language and virtual machine. We are working at three levels to address these challenges: evolving the Erlang virtual machine so that it can work effectively on large-scale multicore systems; evolving the language to Scalable Distributed (SD) Erlang; and developing a scalable Erlang infrastructure to integrate multiple, heterogeneous clusters. We are also developing state-of-the-art tools that allow programmers to understand the behaviour of massively parallel SD Erlang programs. We will demonstrate the effectiveness of the RELEASE approach using demonstrators and two large case studies on a Blue Gene.

    Tensor Learning for Recovering Missing Information: Algorithms and Applications on Social Media

    Real-time social systems like Facebook, Twitter, and Snapchat have been growing rapidly, producing exabytes of data in different views or aspects. Coupled with more and more GPS-enabled sharing of videos, images, blogs, and tweets that provide valuable information regarding “who”, “where”, “when” and “what”, these real-time human sensor data promise new research opportunities to uncover models of user behavior, mobility, and information sharing. These real-time dynamics in social systems usually come in multiple aspects, which can help better understand the social interactions of the underlying network. However, these multi-aspect datasets are often raw and incomplete owing to various unpredictable or unavoidable reasons; for instance, API limitations and data sampling policies can lead to an incomplete (and often biased) perspective on these multi-aspect datasets. This missing data could raise serious concerns, such as biased estimations of structural properties of the network and of properties of information cascades in social networks. In order to recover missing values or information in social systems, we identify “4S” challenges: extreme sparsity of the observed multi-aspect datasets, adoption of rich side information that is able to describe the similarities of entities, generation of robust models rather than limiting them to specific applications, and scalability of models to handle real large-scale datasets (billions of observed entries). With these challenges in mind, this dissertation aims to develop scalable and interpretable tensor-based frameworks, algorithms, and methods for recovering missing information on social media. In particular, this dissertation research makes four unique contributions:
    • The first contribution is a scalable framework based on low-rank tensor learning in the presence of incomplete information. Concretely, we formally define the problem of recovering the spatio-temporal dynamics of online memes and tackle it by proposing a novel tensor-based factorization approach based on the alternating direction method of multipliers (ADMM), with the integration of latent relationships derived from contextual information among locations, memes, and times.
    • The second contribution is an evaluation of the generalization of the proposed tensor learning framework and its extension to the recommendation problem. In particular, we develop a novel tensor-based approach to personalized expert recommendation by integrating both the latent relationships between homogeneous entities (e.g., users and users, experts and experts) and the relationships between heterogeneous entities (e.g., users and experts, topics and experts) from the geo-spatial, topical, and social contexts.
    • The third contribution extends the proposed tensor learning framework to the user topical profiling problem. Specifically, we propose a tensor-based contextual regularization model embedded into a matrix factorization framework, which leverages the social, textual, and behavioral contexts across users in order to overcome the identified challenges.
    • The fourth contribution scales the proposed tensor learning framework up to real large-scale datasets that are too big to fit in the main memory of a single machine. In particular, we propose a novel distributed tensor completion algorithm with trace-based regularization of the auxiliary information, based on ADMM under the proposed tensor learning framework, designed to scale to real large-scale tensors (e.g., billions of entries) by efficiently computing auxiliary variables, minimizing intermediate data, and reducing the workload of updating new tensors.
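    As a hedged illustration of tensor-based recovery (not the dissertation's ADMM algorithm, its trace-based regularization, or its distributed implementation), a minimal CP factorization fitted by stochastic gradient descent on the observed entries can predict the missing ones:

```python
import random


def cp_complete(observed, shape, rank=2, steps=500, lr=0.05, seed=0):
    """Fit a rank-r CP model x_ijk ~ sum_r A[i][r]*B[j][r]*C[k][r] to the
    observed entries of a 3-way tensor by SGD; the returned predictor then
    estimates any entry, observed or missing, from the learned factors."""
    rng = random.Random(seed)
    I, J, K = shape
    A = [[rng.uniform(-1, 1) for _ in range(rank)] for _ in range(I)]
    B = [[rng.uniform(-1, 1) for _ in range(rank)] for _ in range(J)]
    C = [[rng.uniform(-1, 1) for _ in range(rank)] for _ in range(K)]
    entries = list(observed.items())
    for _ in range(steps):
        rng.shuffle(entries)
        for (i, j, k), v in entries:
            pred = sum(A[i][r] * B[j][r] * C[k][r] for r in range(rank))
            err = pred - v
            for r in range(rank):
                ga = err * B[j][r] * C[k][r]
                gb = err * A[i][r] * C[k][r]
                gc = err * A[i][r] * B[j][r]
                A[i][r] -= lr * ga
                B[j][r] -= lr * gb
                C[k][r] -= lr * gc

    def predict(i, j, k):
        return sum(A[i][r] * B[j][r] * C[k][r] for r in range(rank))

    return predict
```

    Because the model is low-rank, the handful of observed entries constrains the factors, and those factors in turn fill in entries that were never seen; the dissertation's contribution lies in doing this at billions-of-entries scale with side information, which this sketch deliberately omits.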