8 research outputs found

    Accessing files in an internet: The Jade file system

    Jade is a new distributed file system that provides a uniform way to name and access files in an internet environment. It makes two important contributions. First, Jade is a logical system that integrates a heterogeneous collection of existing file systems, where heterogeneous means that the underlying file systems support different file access protocols. Jade is designed under the restriction that the underlying file systems may not be modified. Second, rather than providing a global name space, Jade permits each user to define a private name space. These private name spaces support two novel features: they allow multiple file systems to be mounted under one directory, and they allow one logical name space to mount other logical name spaces. A prototype of the Jade File System was implemented on Sun Workstations running Unix. It consists of interfaces to the Unix file system, the Sun Network File System, the Andrew File System, and FTP. This paper motivates Jade's design, highlights several aspects of its implementation, and illustrates applications that can take advantage of its features.
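    As a sketch of how such private name spaces might behave, the following Python fragment models a logical name space in which one directory can carry several mounts and a mount target can itself be another logical name space. The class and method names are illustrative assumptions, not Jade's actual interface:

        class LogicalNameSpace:
            """A per-user logical directory tree whose entries carry mounts."""

            def __init__(self):
                # Maps a logical directory to an ordered list of mount targets.
                self.mounts = {}

            def mount(self, logical_dir, target):
                # A target is either a backend file system (here, a URL string)
                # or another LogicalNameSpace; repeated calls on the same
                # directory stack multiple mounts under it.
                self.mounts.setdefault(logical_dir, []).append(target)

            def resolve(self, path):
                # Search the longest matching prefix first, then try each of
                # its mounts in order.
                for prefix in sorted(self.mounts, key=len, reverse=True):
                    if path == prefix or path.startswith(prefix.rstrip("/") + "/"):
                        rest = path[len(prefix):].lstrip("/")
                        for target in self.mounts[prefix]:
                            if isinstance(target, LogicalNameSpace):
                                hit = target.resolve("/" + rest)  # nested name space
                                if hit is not None:
                                    return hit
                            else:
                                return (target, rest)  # (backend, remaining path)
                return None

        # One directory backed by two file systems, plus a mounted name space.
        shared = LogicalNameSpace()
        shared.mount("/datasets", "afs://cell/project/datasets")

        home = LogicalNameSpace()
        home.mount("/papers", "nfs://fileserver/export/papers")
        home.mount("/papers", "ftp://archive.example.org/pub/papers")
        home.mount("/group", shared)

        print(home.resolve("/papers/jade.ps"))          # first /papers mount wins
        print(home.resolve("/group/datasets/hot.dat"))  # via the nested name space

    A real resolver would presumably fall through to the next mount when a lookup fails in the first; the sketch only shows the name-space structure that makes such fall-through possible.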

    [MODIS Investigation]

    The objectives of the last six months were: (1) Continue analysis of Hawaii Ocean Time-series (HOT) bio-optical mooring data and Southern Ocean bio-optical drifter data; (2) Complete development of documentation of MOCEAN algorithms and software for use by the MOCEAN team and GLI team; (3) Deploy instrumentation during JGOFS cruises in the Southern Ocean; (4) Participate in a test cruise for the Fast Repetition Rate (FRR) fluorometer; (5) Continue chemostat experiments on the relationship of fluorescence quantum yield to environmental factors; and (6) Continue to develop and expand a browser-based information system for in situ bio-optical data. We are continuing to analyze bio-optical data collected at the HOT mooring as well as data from bio-optical drifters that were deployed in the Southern Ocean. A draft manuscript has now been prepared and is being revised. A second manuscript is also in preparation that explores the vector wind fields derived from NSCAT measurements. The HOT bio-optical mooring was recovered in December 1997. After retrieving the data, the sensor package was serviced and redeployed. We have begun preliminary analysis of these data, but we have only had the data for three weeks. However, all of the data were recovered, and there were no obvious anomalies. We will add a second sensor package to the mooring when it is serviced next spring. In addition, Ricardo Letelier is funded as part of the SeaWiFS calibration/validation effort (through a subcontract from the University of Hawaii, Dr. John Porter), and he will be collecting bio-optical and fluorescence data as part of the HOT activity. This will provide additional in situ measurements for MODIS validation. As noted in the previous quarterly report, we have been analyzing data from three bio-optical drifters that were deployed in the Southern Ocean in September 1996. We presented results on chlorophyll and drifter speed. For the 1998 Ocean Sciences meeting, a paper will be presented on this data set, focusing on the diel variations in fluorescence quantum yield. Briefly, there are systematic patterns in the apparent quantum yield of fluorescence (defined as the slope of the line relating fluorescence/chlorophyll and incoming solar radiation). These systematic variations appear to be related to changes in the circulation of the Antarctic Polar Front which force nutrients into the upper ocean. A more complete analysis will be provided in the next quarterly report.
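    For concreteness, the slope that defines the apparent quantum yield can be computed with an ordinary least-squares fit. The following Python fragment uses made-up numbers purely to illustrate the calculation, not actual HOT or drifter data:

        import numpy as np

        # Hypothetical simultaneous measurements (values are illustrative only).
        solar = np.array([50.0, 120.0, 300.0, 480.0, 650.0])  # solar radiation, W/m^2
        fluor = np.array([0.80, 0.95, 1.40, 1.85, 2.20])      # fluorescence, relative units
        chl   = np.array([0.50, 0.52, 0.55, 0.54, 0.53])      # chlorophyll, mg/m^3

        # Apparent quantum yield: slope of fluorescence/chlorophyll vs. radiation.
        slope, intercept = np.polyfit(solar, fluor / chl, 1)
        print(f"apparent quantum yield (slope): {slope:.4f}")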

    The influence of scale on distributed file system design


    Univers: The construction of an internet-wide descriptive naming system

    Descriptive naming systems allow clients to identify a set of objects by description. Described here is the construction of a descriptive naming system, called Univers, based on a model in which clients provide both an object description and some meta-information. The meta-information describes beliefs about the query and the naming system. Specifically, it is an ordering on a set of perfect world approximations, and it describes the preferred methods for accommodating imperfect information. The description is then resolved in a way that respects the preferred approximations.
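    The following Python fragment is a minimal sketch of this resolution model; the sample database, the two approximation predicates, and the resolve() interface are illustrative assumptions, not Univers' actual design. A description is resolved under the most preferred approximation that yields a non-empty answer:

        objects = [
            {"name": "printerA", "type": "printer", "building": "CS", "floor": 3},
            {"name": "printerB", "type": "printer", "building": "CS"},  # floor unknown
        ]

        def exact(obj, desc):
            """Approximation 1: treat the database as complete and current."""
            return all(obj.get(k) == v for k, v in desc.items())

        def ignore_missing(obj, desc):
            """Approximation 2: tolerate attributes the database lacks."""
            return all(obj.get(k) in (v, None) for k, v in desc.items())

        def resolve(desc, approximations):
            # The client's meta-information: an ordering on approximations.
            for approx in approximations:
                hits = [o for o in objects if approx(o, desc)]
                if hits:
                    return hits  # most preferred approximation that works
            return []

        # Prefer a perfect match; fall back to tolerating imperfect information.
        print(resolve({"type": "printer", "building": "CS", "floor": 2},
                      [exact, ignore_missing]))  # finds printerB under approximation 2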

    Software Packaging

    Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This thesis describes a process called software packaging that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Whereas previous efforts focused solely on integration mechanisms, software packaging provides a context that relates such mechanisms to software integration processes. We demonstrate the value of this approach by reducing the cost of configuring applications whose components are distributed and implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX MAKE by providing a rule-based approach to software integration that is independent of execution environments. (Also cross-referenced as UMIACS-TR-93-56.)
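    One way to picture the rule-based approach is as a search over the translators and adapters available in an environment. The following Python fragment uses hypothetical rules and tool names, not the thesis' actual rule language, to find a shortest tool chain between component types:

        from collections import deque

        # Each rule: (input type, output type, tool) -- assumed examples.
        RULES = [
            ("c-source",   "object",     "cc -c"),
            ("idl-spec",   "rpc-stub-c", "stubgen"),
            ("rpc-stub-c", "object",     "cc -c"),
            ("object",     "executable", "ld"),
        ]

        def integration_plan(start_type, goal_type):
            """Breadth-first search over rules for a shortest tool chain."""
            queue = deque([(start_type, [])])
            seen = {start_type}
            while queue:
                current, plan = queue.popleft()
                if current == goal_type:
                    return plan
                for src, dst, tool in RULES:
                    if src == current and dst not in seen:
                        seen.add(dst)
                        queue.append((dst, plan + [tool]))
            return None  # no way to integrate these component types

        print(integration_plan("idl-spec", "executable"))  # ['stubgen', 'cc -c', 'ld']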

    Balancing Interactive Performance and Budgeted Resources in Mobile Computing.

    In this dissertation, we explore the various limited resources involved in mobile applications --- battery energy, cellular data usage, and, critically, user attention --- and we devise principled methods for managing the tradeoffs involved in creating a good user experience. Building quality mobile applications requires developers to understand complex interactions between network usage, performance, and resource consumption. Because of this difficulty, developers commonly choose simple but suboptimal approaches that strictly prioritize performance or resource conservation. These extremes are symptoms of a lack of system-provided abstractions for managing the complexity inherent in managing performance/resource tradeoffs. By providing abstractions that help applications manage these tradeoffs, mobile systems can significantly improve user-visible performance without exhausting resource budgets. This dissertation explores three such abstractions in detail. We first present Intentional Networking, a system that provides synchronization primitives and intelligent scheduling for multi-network traffic. Next, we present Informed Mobile Prefetching, a system that helps applications decide when to prefetch data and how aggressively to spend limited battery energy and cellular data resources toward that end. Finally, we present Meatballs, a library that helps applications consider the cloudy nature of predictions when making decisions, selectively employing redundancy to mitigate uncertainty and provide more reliable performance. Overall, experiments show that these abstractions can significantly reduce interactive delay without overspending the available energy and data resources.
    Ph.D., Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/108956/1/brettdh_1.pd
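    In the spirit of Informed Mobile Prefetching, a prefetch decision can be framed as a cost-benefit test in which resource costs are weighted by how scarce the remaining budgets are. The following Python fragment is a sketch under that assumption; the weighting scheme and numbers are illustrative, not the dissertation's actual algorithm:

        def should_prefetch(hit_probability, latency_saved_s,
                            energy_cost_j, data_cost_mb,
                            energy_budget_j, data_budget_mb):
            """Prefetch when expected benefit outweighs budget-weighted costs."""
            # The scarcer a budget, the more expensive spending it becomes.
            energy_weight = 1.0 / max(energy_budget_j, 1e-9)
            data_weight = 1.0 / max(data_budget_mb, 1e-9)
            benefit = hit_probability * latency_saved_s
            cost = energy_cost_j * energy_weight + data_cost_mb * data_weight
            return benefit > cost

        # A likely-to-be-read item is fetched while budgets are healthy...
        print(should_prefetch(0.9, 2.0, 5.0, 0.5, 5000.0, 200.0))  # True
        # ...but the same fetch is skipped once the data budget runs low.
        print(should_prefetch(0.9, 2.0, 5.0, 0.5, 5000.0, 0.2))    # False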

    Enabling Censorship Tolerant Networking

    Billions of people in the world live under heavy information censorship. We propose a new class of delay tolerant network (DTN), known as a censorship tolerant network (CTN), to counter the growing practice of Internet-based censorship. CTNs should provide strict guarantees on the privacy of both information shared within the network and the identities of network participants. CTN software needs to be publicly available as open source software and run on personal mobile devices with real-world computational, storage, and energy constraints. We show that these simple assumptions and system constraints have a non-obvious impact on the design and implementation of CTNs, and serve to differentiate our system design from previous work.

    We design data routing within a CTN using a new paradigm: one where nodes operate selfishly to maximize their own utility, make decisions based only on their own observations, and only communicate with nodes they trust. We introduce the Laissez-faire framework, an incentivized approach to CTN routing. Laissez-faire does not mandate any specific routing protocol, but requires that each node implement tit-for-tat by keeping track of the data exchanged with other trusted nodes. We propose several strategies for valuing and retrieving content within a CTN. We build a prototype BlackBerry implementation and conduct both controlled lab and field trials, and show how each strategy adapts to different network conditions. We further demonstrate that, unlike existing approaches to routing, Laissez-faire prevents free-riding.

    We build an efficient and reliable data transport protocol on top of the Short Message Service (SMS) to serve as a control channel for the CTN. We conduct a series of experiments to characterise SMS behaviour under bursty, unconventional workloads. This study examines how variables such as the transmission order, the delay between transmissions, the network interface used, and the time of day affect the service. We present the design and implementation of our transport protocol. We show that by adapting to the unique channel conditions of SMS we can reduce message overheads by as much as 50% and increase data throughput by as much as 545% over the approach used by existing applications.

    A CTN's dependency on opportunistic communication imposes a significant burden on smartphone energy resources. We conduct a large-scale user study to measure the energy consumption characteristics of 20,100 smartphone users. Our dataset is two orders of magnitude larger than any previous work. We use this dataset to build the Energy Emulation Toolkit (EET), which allows developers to evaluate the energy consumption requirements of their applications against real users' energy traces. The EET computes the successful execution rate of energy-intensive applications across all users, specific devices, and specific smartphone user-types. We also consider active adaptation to energy constraints. By classifying smartphone users based on their charging characteristics, we demonstrate that energy level can be predicted with 72% accuracy a full day in advance, and that through an Energy Management Oracle, energy-intensive applications such as CTNs can adapt their execution to maintain the operation of the host device.
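    The tit-for-tat bookkeeping that Laissez-faire requires can be pictured as a per-peer byte ledger. The following Python fragment is a sketch of that idea; the class, the credit limit, and the method names are illustrative assumptions rather than the system's actual implementation:

        class TitForTatLedger:
            """Track bytes exchanged with trusted peers; refuse free-riders."""

            def __init__(self, credit_limit):
                self.balance = {}                 # peer -> bytes sent minus received
                self.credit_limit = credit_limit  # initial generosity, in bytes

            def record_sent(self, peer, nbytes):
                self.balance[peer] = self.balance.get(peer, 0) + nbytes

            def record_received(self, peer, nbytes):
                self.balance[peer] = self.balance.get(peer, 0) - nbytes

            def may_serve(self, peer):
                # Serve a peer only while its debt stays under the credit limit.
                return self.balance.get(peer, 0) < self.credit_limit

        ledger = TitForTatLedger(credit_limit=10_000)
        ledger.record_sent("alice", 8_000)       # we gave alice 8 KB
        ledger.record_received("alice", 7_500)   # she reciprocated
        print(ledger.may_serve("alice"))         # True: roughly balanced
        ledger.record_sent("bob", 12_000)        # bob only takes
        print(ledger.may_serve("bob"))           # False: free-rider is cut off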