
    A distributed alerting service for open digital library software

    Alerting for Digital Libraries (DL) is an important and useful feature for library users. To date, two independent services and a few publisher-hosted proprietary services have been developed. Here, we address the problem of integrating alerting as functionality into open source software for distributed digital libraries. DL software is one application out of many that constitute so-called meta-software: software whose installation determines the properties of the actual running system (here: the Digital Library system). For this type of application, existing alerting solutions are insufficient; new ways have to be found to support a fragmented network of distributed digital library servers. We propose the design and usage of a distributed Directory Service. This paper also introduces our hybrid approach, which uses two networks and a combination of different distributed routing strategies for event filtering.
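
    As a concrete illustration of the event-filtering idea above, the following minimal Python sketch shows how a directory-style component might route "new document" events to matching subscriber profiles. The names (Event, Subscription, Directory) and the keyword-overlap matching rule are illustrative assumptions, not the design described in the paper.

    # Minimal sketch of profile-based event filtering for a DL alerting service.
    # All class names and the matching rule are hypothetical, not taken from the paper.
    from dataclasses import dataclass

    @dataclass
    class Event:
        """A 'new document' event emitted by one digital library server."""
        collection: str
        keywords: set

    @dataclass
    class Subscription:
        """A user profile: which collection and which keywords to be alerted on."""
        user: str
        collection: str
        keywords: set

        def matches(self, event: Event) -> bool:
            return self.collection == event.collection and bool(self.keywords & event.keywords)

    class Directory:
        """Toy directory: maps each collection to the subscriptions registered for it."""
        def __init__(self):
            self._by_collection = {}

        def register(self, sub: Subscription) -> None:
            self._by_collection.setdefault(sub.collection, []).append(sub)

        def route(self, event: Event) -> list:
            """Return the users to notify about this event."""
            return [s.user for s in self._by_collection.get(event.collection, []) if s.matches(event)]

    d = Directory()
    d.register(Subscription("alice", "cs-dl", {"alerting", "p2p"}))
    d.register(Subscription("bob", "cs-dl", {"ontologies"}))
    print(d.route(Event("cs-dl", {"alerting", "digital libraries"})))  # ['alice']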

    Document Archiving, Replication and Migration Container for Mobile Web Users

    With the increasing use of mobile workstations for a wide variety of tasks and associated information needs, and with many variations of available networks, access to data becomes a prime consideration. This paper discusses issues of workstation mobility and proposes a solution wherein the data structures are accessed in an encapsulated form - through the Portable File System (PFS) wrapper. The paper discusses an implementation of the Portable File System, highlighting the architecture and commenting upon the performance of an experimental system. Although investigations have focused upon mobile access to WWW documents, this technique could be applied to any mobile data access situation. Comment: 5 pages
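
    The encapsulated-access idea can be illustrated with a short, hedged Python sketch: documents are read and written through a container object rather than raw paths, so the same calls work whether the data is local, cached, or replicated to a mobile workstation. The PortableContainer class and its methods are hypothetical and do not reproduce the paper's PFS interface.

    # Hypothetical wrapper in the spirit of a portable file system: all access goes
    # through the container, which can also replicate its contents elsewhere.
    import os
    import shutil

    class PortableContainer:
        def __init__(self, root: str):
            self.root = root
            os.makedirs(root, exist_ok=True)

        def put(self, name: str, data: bytes) -> None:
            """Archive a document into the container."""
            with open(os.path.join(self.root, name), "wb") as f:
                f.write(data)

        def get(self, name: str) -> bytes:
            """Read a document back through the wrapper."""
            with open(os.path.join(self.root, name), "rb") as f:
                return f.read()

        def replicate_to(self, other: "PortableContainer") -> None:
            """Copy every document into another container, e.g. before going offline."""
            for name in os.listdir(self.root):
                shutil.copy(os.path.join(self.root, name), os.path.join(other.root, name))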

    Mobile Computing in Physics Analysis - An Indicator for eScience

    This paper presents the design and implementation of a Grid-enabled physics analysis environment for handheld and other resource-limited computing devices, as one example of the use of mobile devices in eScience. Handheld devices offer great potential because they provide ubiquitous access to data and round-the-clock connectivity over wireless links. Our solution aims to give users of handheld devices the capability to launch heavy computational tasks on computational and data Grids, monitor job status during execution, and retrieve results after job completion. Users carry their jobs on their handheld devices in the form of executables (and associated libraries). Users can transparently view the status of their jobs and get back their outputs without having to know where they are being executed. In this way, our system is able to act as a high-throughput computing environment where devices ranging from powerful desktop machines to small handhelds can employ the power of the Grid. The results shown in this paper are readily applicable to the wider eScience community. Comment: 8 pages, 7 figures. Presented at the 3rd Int Conf on Mobile Computing & Ubiquitous Networking (ICMU06), London, October 2006
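
    The submit / monitor / retrieve cycle described above can be sketched as a small client API. The GridClient class below is an illustrative stand-in with an in-memory job table; it is not the paper's middleware, and its method names are assumptions.

    # Toy sketch of the handheld-side workflow: carry an executable, submit it,
    # poll its status, and fetch the output without knowing where it ran.
    import uuid

    class GridClient:
        def __init__(self):
            self._jobs = {}

        def submit(self, executable: str, libraries=None) -> str:
            """Hand the executable (plus libraries) to the grid; return a job id."""
            job_id = str(uuid.uuid4())
            self._jobs[job_id] = {"exe": executable, "libs": libraries or [], "state": "RUNNING"}
            return job_id

        def status(self, job_id: str) -> str:
            """Poll the job state from the handheld."""
            return self._jobs[job_id]["state"]

        def fetch_output(self, job_id: str) -> str:
            """Retrieve the result; completion is simulated in this toy version."""
            self._jobs[job_id]["state"] = "DONE"
            return "output of " + self._jobs[job_id]["exe"]

    client = GridClient()
    jid = client.submit("analysis_task", ["libphysics.so"])
    print(client.status(jid), "->", client.fetch_output(jid), "->", client.status(jid))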

    Blindspot: Indistinguishable Anonymous Communications

    Communication anonymity is a key requirement for individuals under targeted surveillance. Practical anonymous communications also require indistinguishability - an adversary should be unable to distinguish between anonymised and non-anonymised traffic for a given user. We propose Blindspot, a design for high-latency anonymous communications that offers indistinguishability and unobservability under a (qualified) global active adversary. Blindspot creates anonymous routes between sender-receiver pairs by subliminally encoding messages within the pre-existing communication behaviour of users within a social network - specifically, the organic image sharing behaviour of users. Thus, channel bandwidth depends on the intensity of image sharing along a route. A major challenge we successfully overcome is that routing must be accomplished in the face of a significant restriction: channel bandwidth is stochastic. We show that conventional social network routing strategies do not work. To solve this problem, we propose a novel routing algorithm. We evaluate Blindspot using a real-world dataset. We find that it delivers reasonable results for applications requiring low-volume unobservable communication. Comment: 13 pages
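
    The stochastic-bandwidth constraint can be made concrete with a small sketch: each social link only carries data when the neighbour happens to share images, so a sender might favour neighbours with a higher expected sharing rate. This is not the paper's routing algorithm, only an illustration of the restriction it has to work under; the function names and the weighted-choice rule are assumptions.

    # Toy next-hop selection when channel capacity is stochastic: weight each
    # neighbour by its mean observed image-sharing rate.
    import random

    def expected_rate(history):
        """Mean number of images shared per day over an observed history."""
        return sum(history) / len(history) if history else 0.0

    def pick_next_hop(neighbours):
        """neighbours: dict mapping node id -> list of daily image counts."""
        rates = {n: expected_rate(h) for n, h in neighbours.items()}
        total = sum(rates.values())
        if total == 0:
            return None
        r = random.uniform(0, total)
        acc = 0.0
        for node, rate in rates.items():
            acc += rate
            if r <= acc:
                return node
        return node  # fallback for floating-point rounding

    print(pick_next_hop({"u1": [3, 5, 2], "u2": [0, 1, 0], "u3": [8, 7, 9]}))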

    Agents, Bookmarks and Clicks: A topical model of Web traffic

    Analysis of aggregate and individual Web traffic has shown that PageRank is a poor model of how people navigate the Web. Using the empirical traffic patterns generated by a thousand users, we characterize several properties of Web traffic that cannot be reproduced by Markovian models. We examine both aggregate statistics capturing collective behavior, such as page and link traffic, and individual statistics, such as entropy and session size. No model currently explains all of these empirical observations simultaneously. We show that all of these traffic patterns can be explained by an agent-based model that takes into account several realistic browsing behaviors. First, agents maintain individual lists of bookmarks (a non-Markovian memory mechanism) that are used as teleportation targets. Second, agents can retreat along visited links, a branching mechanism that also allows us to reproduce behaviors such as the use of a back button and tabbed browsing. Finally, agents are sustained by visiting novel pages of topical interest, with adjacent pages being more topically related to each other than distant ones. This modulates the probability that an agent continues to browse or starts a new session, allowing us to recreate heterogeneous session lengths. The resulting model is capable of reproducing the collective and individual behaviors we observe in the empirical data, reconciling the narrowly focused browsing patterns of individual users with the extreme heterogeneity of aggregate traffic measurements. This result allows us to identify a few salient features that are necessary and sufficient to interpret the browsing patterns observed in our data. In addition to the descriptive and explanatory power of such a model, our results may lead the way to more sophisticated, realistic, and effective ranking and crawling algorithms. Comment: 10 pages, 16 figures, 1 table - Long version of paper to appear in Proceedings of the 21st ACM conference on Hypertext and Hypermedia
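
    The three agent behaviours (bookmark teleportation, retreating along visited links, and topicality-dependent session termination) can be summarised in a short simulation sketch. The toy web graph, parameter values, and the page-id proxy for topical distance below are illustrative assumptions, not the model fitted in the paper.

    # Minimal agent-based browsing sketch: teleport to a bookmark, follow or retreat
    # along links, and stop with a probability that grows as pages become less related.
    import random

    PAGES = list(range(100))
    LINKS = {p: random.sample(PAGES, 5) for p in PAGES}  # toy web graph

    def browse_session(bookmarks, p_back=0.15, p_stop_base=0.05):
        page = random.choice(bookmarks)   # teleportation target: one of the agent's bookmarks
        path = [page]
        while True:
            if random.random() < p_back and len(path) > 1:
                path.pop()                # retreat along a visited link (back button)
                page = path[-1]
            else:
                page = random.choice(LINKS[page])
                path.append(page)
            # Toy proxy: page-id distance stands in for topical distance from the session start.
            relatedness = 1.0 / (1 + abs(page - path[0]))
            if random.random() < p_stop_base + (1 - relatedness) * 0.05:
                if random.random() < 0.1:
                    bookmarks.append(page)  # occasionally bookmark an interesting page
                return path

    marks = [0, 50, 99]
    sizes = [len(browse_session(marks)) for _ in range(1000)]
    print("mean session size:", sum(sizes) / len(sizes))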

    A Generic Alerting Service for Digital Libraries

    Users of modern digital libraries (DLs) can keep themselves up to date by searching and browsing their favorite collections, or more conveniently by resorting to an alerting service. The alerting service notifies its clients about new or changed documents. Proprietary and mediating alerting services fail to fluidly integrate information from differing collections. This paper analyses the conceptual requirements of this much-sought-after service for digital libraries. We demonstrate that the differing concepts of digital libraries and their underlying technical designs have extensive influence on (a) the expectations, needs and interests of users regarding an alerting service, and (b) the technical possibilities of implementing the service. Our findings show that the range of issues surrounding alerting services for digital libraries, their design and their use is greater than one might anticipate. We also show that, conversely, the requirements for an alerting service have considerable impact on the concepts of DL design. Our findings should be of interest to librarians as well as system designers. We highlight and discuss the far-reaching implications for the design of, and interaction with, libraries. This paper discusses the lessons learned from building such a distributed alerting service. We present our prototype implementation as a proof of concept for an alerting service for open DL software.

    POOL File Catalog, Collection and Metadata Components

    The POOL project is the common persistency framework for the LHC experiments, designed to store petabytes of experiment data and metadata in a distributed and grid-enabled way. POOL is a hybrid event store consisting of a data streaming layer and a relational layer. This paper describes the design of the file catalog, collection and metadata components, which are not part of the data streaming layer of POOL, and outlines how POOL aims to provide transparent and efficient data access for a wide range of environments and use cases - ranging from a large production site down to a single disconnected laptop. The file catalog is the central POOL component translating logical data references into physical data files in a grid environment. POOL collections, with their associated metadata, provide an abstract way of accessing experiment data via their logical grouping into sets of related data objects. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 4 pages, 1 eps figure, PSN MOKT00
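
    The two roles mentioned above (resolving logical references to physical files, and grouping related objects into collections with metadata) can be illustrated with a brief sketch. The FileCatalog and Collection classes below are hypothetical simplifications and do not reproduce the POOL interfaces.

    # Toy logical-to-physical catalog plus a metadata-carrying collection of references.
    class FileCatalog:
        def __init__(self):
            self._lfn_to_pfns = {}

        def register(self, lfn: str, pfn: str) -> None:
            """Record one physical replica for a logical file name."""
            self._lfn_to_pfns.setdefault(lfn, []).append(pfn)

        def resolve(self, lfn: str) -> str:
            """Translate a logical reference into one physical location (first replica here)."""
            replicas = self._lfn_to_pfns.get(lfn)
            if not replicas:
                raise KeyError("no physical file registered for " + lfn)
            return replicas[0]

    class Collection:
        """A named set of logical references plus metadata describing the grouping."""
        def __init__(self, name: str, metadata: dict):
            self.name = name
            self.metadata = metadata
            self.refs = []

        def add(self, lfn: str) -> None:
            self.refs.append(lfn)

    cat = FileCatalog()
    cat.register("lfn:run1/events.data", "pfn:/grid/site-a/run1/events.data")
    coll = Collection("run1-selection", {"year": 2003})
    coll.add("lfn:run1/events.data")
    print([cat.resolve(r) for r in coll.refs])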

    Performance Analysis of Publish/Subscribe Systems

    The Desktop Grid offers solutions to overcome several challenges and to meet the growing needs of scientific computing. Its technology consists mainly in exploiting geographically dispersed resources to run complex applications that require large computational power and/or significant storage capacity. However, as the number of resources increases, the need for scalability, self-organisation, dynamic reconfiguration, decentralisation and performance becomes more and more essential. Since such properties are exhibited by P2P systems, the convergence of grid computing and P2P computing seems natural. In this context, this paper evaluates the scalability and performance of P2P tools for discovering and registering services. Three protocols are used for this purpose: Bonjour, Avahi and Free-Pastry. We have studied the behaviour of these protocols with respect to two criteria: the elapsed time for service registration and the time needed to discover new services. Our aim is to analyse these results in order to choose the best protocol for building a decentralised middleware for desktop grids.
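
    The two criteria studied (time to register a service and time until a new service is discovered) can be captured with a simple timing harness. The sketch below uses placeholder register/discover callables and an in-memory registry, since the actual measurements would plug in a Bonjour, Avahi or Free-Pastry client; all names here are assumptions.

    # Illustrative harness: time one registration, then poll until the service is discoverable.
    import time

    def measure(register, discover, service_name):
        t0 = time.perf_counter()
        register(service_name)                # publish the service
        t_register = time.perf_counter() - t0

        t0 = time.perf_counter()
        while not discover(service_name):     # poll until the service becomes visible
            time.sleep(0.01)
        t_discover = time.perf_counter() - t0

        return {"registration_s": t_register, "discovery_s": t_discover}

    # Toy in-memory stand-in so the sketch runs without any mDNS or DHT stack.
    registry = set()
    print(measure(registry.add, lambda n: n in registry, "worker-01"))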