
    Near-Data Prediction Based Speculative Optimization in a Distribution Environment

    Hadoop is an open-source Apache project comprising a distributed file system and the MapReduce distributed computing framework. Its Apache 2.0 license supports on-demand, pay-as-you-go cloud platform services, letting providers with differing hardware offer cloud services to consumers. In a cloud environment there is a need to balance the resource requirements of workloads, optimize load performance, and manage cloud computing costs. When the processing power of clustered machines varies widely, for example because hardware is aging or overloaded, Hadoop offers a speculative execution (SE) optimization strategy: it monitors task progress in real time and, when the tasks of a job are not running at the same speed, starts identical backup tasks on other nodes and takes whichever copy finishes first, so that stragglers do not hold back the overall progress of the job. At present, the SE strategy’s incorrect selection of backup nodes and resource constraints can degrade Hadoop performance and prevent subsequent tasks from completing. This paper proposes an SE optimization strategy based on near-data prediction, which analyzes real-time task execution information to predict the required running time and selects backup nodes according to actual resource needs and data proximity, so that the SE strategy achieves its best performance. Experiments show that, in a heterogeneous Hadoop environment, the optimization strategy effectively improves the efficiency and accuracy of tasks and enhances the performance of the cloud computing platform, to the benefit of consumers.
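    The abstract does not spell out the prediction rule, but the trade-off it describes can be illustrated with a minimal sketch: launch a backup only when a straggler's remaining time, extrapolated from its observed progress rate, clearly exceeds the predicted runtime of a fresh copy on a near-data candidate node. The names and threshold below are illustrative assumptions, not the paper's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class TaskStatus:
    """Snapshot of a running map/reduce task (hypothetical fields)."""
    progress: float   # fraction completed, 0.0 - 1.0
    elapsed_s: float  # seconds since the task started

def estimated_time_left(task: TaskStatus) -> float:
    """Extrapolate remaining time from the observed progress rate."""
    if task.progress <= 0.0:
        return float("inf")
    rate = task.progress / task.elapsed_s          # progress per second
    return (1.0 - task.progress) / rate

def should_speculate(task: TaskStatus,
                     backup_estimate_s: float,
                     slack: float = 1.2) -> bool:
    """
    Launch a backup only if the straggler is predicted to finish
    noticeably later than a fresh copy on a near-data node would.
    `backup_estimate_s` is the predicted runtime on the candidate node.
    """
    return estimated_time_left(task) > slack * backup_estimate_s

# Example: a task 40% done after 200 s vs. a near-data node predicted
# to finish a fresh copy in 180 s.
straggler = TaskStatus(progress=0.4, elapsed_s=200.0)
print(should_speculate(straggler, backup_estimate_s=180.0))  # True
```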

    Web Archive Services Framework for Tighter Integration Between the Past and Present Web

    Web archives have preserved the cultural history of the web for many years, but they still offer limited capabilities for access. Most web archiving research has focused on crawling and preservation activities, with little focus on delivery methods. The current access methods are tightly coupled with web archive infrastructure, hard to replicate or integrate with other web archives, and do not cover all of the users' needs. In this dissertation, we focus on access methods for archived web data that enable users, third-party developers, researchers, and others to gain knowledge from the web archives. We build ArcSys, a new service framework that extracts, preserves, and exposes APIs for the web archive corpus. The dissertation introduces a novel categorization technique that divides the archived corpus into four levels. For each level, we propose suitable services and APIs that enable both users and third-party developers to build new interfaces. The first level is the content level, which extracts the content from the archived web data; we develop ArcContent to expose web archive content processed through various filters. The second level is the metadata level: we extract metadata from the archived web data and make it available to users, implementing two services, ArcLink for the temporal web graph and ArcThumb for optimizing thumbnail creation in web archives. The third level is the URI level, which uses the URI's HTTP redirection status to enhance the user query. Finally, the highest level in the web archiving service framework pyramid is the archive level, at which we define a web archive by the characteristics of its corpus and build Web Archive Profiles. The profiles are used by the Memento Aggregator for query optimization.
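    As a rough illustration of the four-level layering (content, metadata, URI, archive), the hypothetical client below only builds endpoint URLs for each level; the paths and parameters are assumptions for illustration, not the dissertation's actual ArcSys API.

```python
class ArcSysClient:
    """Hypothetical client sketch for the four service levels."""

    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def content(self, uri: str, timestamp: str) -> str:
        # Level 1: filtered content of an archived capture (ArcContent)
        return f"{self.base_url}/content?uri={uri}&t={timestamp}"

    def metadata_links(self, uri: str) -> str:
        # Level 2: temporal web graph for a URI (ArcLink)
        return f"{self.base_url}/metadata/links?uri={uri}"

    def uri_status(self, uri: str) -> str:
        # Level 3: HTTP redirection status history for a URI
        return f"{self.base_url}/uri/status?uri={uri}"

    def archive_profile(self, archive_id: str) -> str:
        # Level 4: corpus characteristics of a whole archive
        return f"{self.base_url}/archive/{archive_id}/profile"

client = ArcSysClient("https://archive.example.org/api")
print(client.metadata_links("http://example.com/"))
```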

    Sub-Linear Privacy-Preserving Near-Neighbor Search

    In Near-Neighbor Search (NNS), a client queries a database (held by a server) for the data most similar to the query (its near-neighbors) under a given similarity metric. The privacy-preserving variant (PP-NNS) requires that neither the server nor the client learn anything about the other party’s data beyond what can be inferred from the outcome of the NNS. The overwhelming growth in the size of current datasets and the lack of a truly secure server in the online world render existing solutions impractical, either because of their high computational requirements or because of unrealistic assumptions that potentially compromise privacy. PP-NNS with query time sub-linear in the size of the database was suggested as an open research direction by Li et al. (CCSW’15). In this paper, we provide the first such algorithm, called Privacy-Preserving Locality Sensitive Indexing (SLSI), which has sub-linear query time and the ability to handle honest-but-curious parties. At the heart of our proposal lies a secure binary embedding scheme generated from a novel probabilistic transformation over a locality sensitive hashing family. We provide information-theoretic bounds for the privacy guarantees and support our theoretical claims with substantial empirical evidence on real-world datasets.
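    As a non-private baseline for the locality-sensitive machinery the paper builds on, the sketch below shows plain random-hyperplane LSH, where Hamming distance between binary codes approximates angular distance between the original vectors; SLSI's additional probabilistic transformation, which provides the privacy guarantee, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hyperplane_codes(X: np.ndarray, n_bits: int) -> np.ndarray:
    """
    Plain (non-private) random-hyperplane LSH: each bit is the sign of a
    projection onto a random direction, so Hamming distance between codes
    approximates angular distance between the original vectors.
    """
    planes = rng.standard_normal((X.shape[1], n_bits))
    return (X @ planes > 0).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

# Toy check: similar vectors get closer codes than dissimilar ones.
base = rng.standard_normal(64)
near = base + 0.1 * rng.standard_normal(64)
far = rng.standard_normal(64)
codes = random_hyperplane_codes(np.stack([base, near, far]), n_bits=128)
print(hamming(codes[0], codes[1]), "<", hamming(codes[0], codes[2]))
```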

    Detecting, Modeling, and Predicting User Temporal Intention

    The content of social media has grown exponentially in recent years, and its role has evolved from narrating life events to actually shaping them. Unfortunately, content posted and shared in social networks is vulnerable and prone to loss or change, rendering the context associated with it (a tweet, post, status, or other item) meaningless. There is an inherent value in maintaining the consistency of such social records, as in some cases they take over the task of being the first draft of history: collections of these social posts narrate the pulse of the street during historic events, protests, riots, elections, wars, disasters, and more, as shown in this work. The user sharing a resource has an implicit temporal intent: either the state of the resource at the time of sharing, or the current state of the resource at the time of the reader's clicking. In this research, we propose a model to detect and predict temporal intention, both of the author upon sharing content in the social network and of the reader upon resolving this content. To build this model, we first examine the three aspects of the problem: the resource, time, and the user. For the resource, we start by analyzing the content on the live web and its persistence. We noticed that a portion of the resources shared in social media disappear, and further analysis unraveled a relationship between this disappearance and time: we lose around 11% of the resources after one year of sharing and a steady 7% every following year. We then turn to the public archives, and our analysis reveals that not all posted resources are archived; even for those that are, an average of 8% per year disappears from the archives, and in some cases the archived content is heavily damaged. These observations show that the archives are not well-enough populated to consistently and reliably reconstruct a missing resource as it existed at the time of sharing. To analyze the concept of time, we devised several experiments to estimate the creation dates of shared resources. We developed Carbon Date, a tool that successfully estimated the correct creation dates for 76% of the test sets. Beyond creation, we wanted to measure whether and how resources change with time, so we conducted a longitudinal study on a data set of very recently published tweet-resource pairs, recording observations hourly. We found that after just one hour, ~4% of the resources had changed by ≥30%, while after a day the change slowed, with ~12% of the resources having changed by ≥40%. For the third and final component of the problem, we conducted user behavioral analysis experiments and built a data set of 1,124 instances manually assigned by test subjects. Temporal intention proved to be a difficult concept for average users to understand. We therefore developed our Temporal Intention Relevancy Model (TIRM) to transform the highly subjective temporal intention problem into the more easily understood questions of relevancy between a tweet and the resource it links to, and of change of the resource through time. On our collected data set, TIRM achieved a 90.27% success rate. Furthermore, we extended TIRM and used it to build a time-based model that predicts temporal intention change or steadiness at the time of posting with 77% accuracy. We built a service API around this model to provide predictions, along with a few prototypes.
Future tools could implement TIRM to assist users in pushing copies of shared resources into public web archives to ensure the integrity of the historical record. Additional tools could assist the mining of the existing social media corpus by dereferencing the intended version of each shared resource based on the intention strength and the time between tweeting and mining.
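    A minimal sketch of the kind of longitudinal change measurement described above: compare two snapshots of a shared resource and compute the fraction that changed. The snapshots and the flagging threshold are illustrative assumptions, not the study's actual pipeline.

```python
from difflib import SequenceMatcher

def change_ratio(snapshot_then: str, snapshot_now: str) -> float:
    """
    Fraction of the resource that changed between two observations,
    computed as 1 minus the similarity of the two text snapshots.
    """
    return 1.0 - SequenceMatcher(None, snapshot_then, snapshot_now).ratio()

# Toy snapshots of a shared page captured an hour apart.
at_share_time = "Breaking: protest gathers downtown. Updates to follow."
one_hour_later = "Breaking: protest gathers downtown. Police close two streets."
drift = change_ratio(at_share_time, one_hour_later)
print(f"changed by {drift:.0%}")  # e.g. flag the pair if drift >= 0.30
```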

    A novel service discovery model for decentralised online social networks.

    Online social networks (OSNs) have become the most popular Internet application, attracting billions of users to share information, disseminate opinions and interact with others in the online society. The unprecedented and growing popularity of OSNs naturally makes social network services a pervasive part of daily life. The majority of OSN service providers adopt a centralised architecture because of its management simplicity and content controllability. However, a centralised architecture for large-scale OSN applications incurs costly deployment of computing infrastructure and suffers from performance bottlenecks. Moreover, the centralised architecture has two major shortcomings, the single point of failure and the lack of privacy, which challenge uninterrupted service provision and raise serious privacy concerns. This thesis proposes a decentralised approach based on peer-to-peer (P2P) networks as an alternative to the traditional centralised architecture. Firstly, a self-organised architecture with self-sustaining social network adaptation has been designed to support decentralised topology maintenance. This self-organised architecture exhibits small-world characteristics, with short average path length and a large average clustering coefficient, to support efficient information exchange. Based on this self-organised architecture, a novel decentralised service discovery model has been developed to achieve semantic-aware and interest-aware query routing in the P2P social network. The proposed model encompasses a service matchmaking module that captures hidden semantic information for query-service matching and a homophily-based query processing module that characterises users’ common social status and interests for personalised query routing. Furthermore, in order to optimise the efficiency of service discovery, a swarm intelligence inspired algorithm has been designed to reduce the query routing overhead. This algorithm employs an adaptive forwarding strategy that adapts to various social network structures and achieves promising search performance with low redundant query overhead in dynamic environments. Finally, a configurable software simulator is implemented to simulate complex networks and to evaluate the proposed service discovery model. Extensive experiments have been conducted through simulations, and the obtained results demonstrate the efficiency and effectiveness of the proposed model.
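    A minimal sketch of interest-aware forwarding in the spirit of the homophily-based query processing described above: rank neighbouring peers by interest overlap with the query and forward to the top-k. The peer names, profiles, and Jaccard scoring are illustrative assumptions, not the thesis's actual algorithm.

```python
from typing import Dict, List, Set

def forward_targets(query_interests: Set[str],
                    neighbors: Dict[str, Set[str]],
                    k: int = 2) -> List[str]:
    """
    Rank neighbouring peers by Jaccard overlap between their interest
    profiles and the query's topics, and forward to the top-k peers.
    """
    def jaccard(a: Set[str], b: Set[str]) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    ranked = sorted(neighbors,
                    key=lambda peer: jaccard(query_interests, neighbors[peer]),
                    reverse=True)
    return ranked[:k]

peers = {
    "alice": {"photography", "travel"},
    "bob": {"music", "photography", "cameras"},
    "carol": {"cooking"},
}
print(forward_targets({"photography", "cameras"}, peers))  # ['bob', 'alice']
```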

    Measuring for privacy: From tracking to cloaking

    We rely on various types of online services to access information for different uses, and often provide sensitive information during our interactions with these services. These online services are of different types, for example commercial websites (banking, education, news, shopping, dating, social media) and essential websites (e.g., government), and are available through websites as well as mobile apps. The growth of websites, mobile devices and the apps that run on those devices has resulted in a proliferation of online services, creating an ecosystem in which everyone using it is tracked. Several past studies have performed privacy measurements to assess the prevalence of tracking in online services. Most of these studies used institutional (i.e., non-residential) resources for their measurements and lacked a global perspective; tracking on online services and its impact on privacy may differ across locations. To fill this gap, we perform a privacy measurement study of popular commercial websites using residential networks at various locations. Unlike commercial online services, there are categories of essential online services (e.g., government, hospital, religion) where users do not expect to be tracked. The users of these essential services often supply information of an extremely personal and sensitive nature (e.g., social insurance numbers, health information, prayer requests or confessions made to a religious minister) when interacting with them. However, contrary to users' expectations, these essential services include user tracking capabilities. We built frameworks to perform privacy measurements of these online services (including both websites and Android apps) of different types (government, hospital and religious services, in jurisdictions around the world). The instrumented tracking metrics (stateless, stateful, session replaying) from the privacy measurements of these online services are then analyzed. Malicious sites (e.g., phishing) mimic online services to deceive users, causing them harm. We found that 80% of the analyzed malicious sites are cloaked and not blocked by search engine crawlers; sensitive information collected from users through these sites is therefore exposed. In addition, the underlying Internet-connected infrastructure (e.g., networked devices such as routers and modems) used by online users can suffer from security issues due to non-use of TLS or use of weak SSL/TLS certificates. Such security issues (e.g., spying on a CCTV camera) can compromise data integrity, confidentiality and user privacy. Overall, we found that tracking on commercial websites differs based on the location of the corresponding residential users. We also observed widespread use of tracking by commercial trackers and of session replay services that expose sensitive information from essential online services. Sensitive information is also exposed through vulnerabilities in online services (e.g., cross-site scripting). Furthermore, a significant proportion of malicious sites evade detection by security and search engine crawlers, which may leave such sites readily available to users. We also detect weaknesses in the TLS ecosystem of the Internet-connected infrastructure that supports these online services.
These observations call for more research on the privacy of online services, as well as on information exposure from malicious online services, to understand the significance of these privacy issues and to adopt appropriate mitigation strategies.
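    A minimal sketch of one of the simpler measurements involved, counting third-party hosts contacted when a page loads; a real study would resolve registrable domains via the public suffix list and also instrument stateful and session-replay tracking. The URLs below are illustrative.

```python
from urllib.parse import urlparse

def third_party_hosts(page_url: str, request_urls: list) -> set:
    """
    Very simplified stateless-tracking measurement: any request whose host
    does not end with the page's host is counted as third-party. (A real
    measurement would compare registrable domains and inspect cookies and
    fingerprinting scripts as well.)
    """
    page_host = urlparse(page_url).netloc
    return {
        urlparse(u).netloc
        for u in request_urls
        if not urlparse(u).netloc.endswith(page_host)
    }

requests = [
    "https://news.example.org/styles.css",
    "https://tracker.adnetwork.com/pixel.gif",
    "https://session-replay.example-analytics.net/record.js",
]
print(third_party_hosts("https://news.example.org/article", requests))
```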