17 research outputs found

    Fusion Channels: A Multi-sensor Data Fusion Architecture

    Due to the falling price and increasing availability of sensors, information capture and processing at a real-time or soft real-time rate is emerging as a dominant application space. This class includes interactive multimedia, robotics, security and surveillance applications, and many more. A common denominator of these applications is the fusion of data gathered by various sensors and data aggregators. In this paper we propose a data fusion architecture specifically geared toward such multi-sensor fusion applications and report on the prototype we have built. Our infrastructure provides a programming abstraction that offers ease of programming while supplying built-in optimizations that are complicated to implement from scratch. We show the ease of programming through two sample applications, and we demonstrate through various experiments that our system has low overhead and offers better performance than naively written fusion routines. We also demonstrate improved scalability.
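
    As a rough illustration of the kind of programming abstraction such an architecture offers, the sketch below shows a minimal fusion channel in Python that gathers one item from each input stream and applies a user-supplied fusion function. The class and method names are hypothetical and are not taken from the paper.

    ```python
    # Hypothetical fusion-channel sketch; names are illustrative, not the
    # paper's actual API. A channel pulls one item from each input queue
    # and applies a user-supplied fusion function to the collected items.
    import queue

    class FusionChannel:
        def __init__(self, inputs, fuse_fn):
            self.inputs = inputs      # list of queue.Queue, one per sensor
            self.fuse_fn = fuse_fn    # user-supplied fusion function
            self.output = queue.Queue()

        def run_once(self):
            # Block until one item is available from every input, then fuse.
            items = [q.get() for q in self.inputs]
            self.output.put(self.fuse_fn(items))

    # Example: fuse two temperature sensors by averaging their readings.
    s1, s2 = queue.Queue(), queue.Queue()
    chan = FusionChannel([s1, s2], lambda xs: sum(xs) / len(xs))
    s1.put(21.5)
    s2.put(22.1)
    chan.run_once()
    print(chan.output.get())  # 21.8
    ```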

    Mediabroker: An architecture for pervasive computing

    MediaBroker is a distributed framework designed to support pervasive computing applications. Specifically, the architecture consists of a transport engine and peripheral clients, and it addresses issues in scalability, data sharing, data transformation, and platform heterogeneity. Key features of MediaBroker are a type-aware data transport capable of dynamically transforming data en route from sources to sinks, an extensible system for describing types of streaming data, and the interaction between the transformation engine and the type system. Details of the MediaBroker architecture and implementation are presented in this paper. Through experimental study, we show reasonable performance for selected streaming media-intensive applications. For example, relative to baseline TCP performance, MediaBroker incurs under 11% latency overhead and achieves roughly 80% of the TCP throughput when streaming items larger than 100 KB across our infrastructure.
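
    To make the idea of a type-aware transport concrete, the following sketch registers transformations keyed by (source type, sink type) and applies one while an item is in transit. This is a minimal illustration under assumed names, not MediaBroker's actual API or type system.

    ```python
    # Minimal sketch of a type-aware transport: data is transformed en
    # route when the sink requests a different type than the source
    # produces. The registry and names are illustrative only.
    transforms = {}   # (source_type, sink_type) -> transformation function

    def register(src_type, dst_type, fn):
        transforms[(src_type, dst_type)] = fn

    def deliver(item, src_type, dst_type):
        if src_type == dst_type:
            return item
        # Apply the registered transformation, if any, before delivery.
        fn = transforms.get((src_type, dst_type))
        if fn is None:
            raise LookupError(f"no transformation {src_type} -> {dst_type}")
        return fn(item)

    # Example: a sink that only understands grayscale frames receives a
    # color frame; the transport converts it in transit.
    register("rgb_frame", "gray_frame", lambda px: [sum(p) // 3 for p in px])
    print(deliver([(255, 0, 0), (0, 255, 0)], "rgb_frame", "gray_frame"))
    ```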

    Causal Memory: Definitions, Implementation, and Programming

    The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by defining causal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that are causally related. Because causal memory is weakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory.
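
    A common way to realize causal ordering in message-passing systems is a vector-clock delivery rule: a write is applied at a replica only after everything that causally preceded it has been applied. The sketch below shows that delivery check as an illustration of the idea; it is not the paper's exact protocol.

    ```python
    # Vector-clock delivery rule: a write from `sender` with timestamp `ts`
    # is applicable at a replica with clock `local_clock` once it is the
    # next write from the sender and all other causally preceding writes
    # have already been applied.
    def can_apply(ts, sender, local_clock):
        for p, t in ts.items():
            if p == sender:
                if t != local_clock.get(p, 0) + 1:
                    return False
            elif t > local_clock.get(p, 0):
                return False
        return True

    # Example: local clock {A: 1, B: 0}. A write from B stamped {A: 1, B: 1}
    # is deliverable, while one stamped {A: 2, B: 1} must wait for A's write.
    print(can_apply({"A": 1, "B": 1}, "B", {"A": 1, "B": 0}))  # True
    print(can_apply({"A": 2, "B": 1}, "B", {"A": 1, "B": 0}))  # False
    ```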

    State Management in .NET Web Services

    In this paper we identify a problem for certain applications wishing to use the web service paradigm to enhance interoperability: rapid, robust state maintenance. We classify two kinds of state: application state and session state. While many features are available to support session data, special mechanisms for application state maintenance are less well developed. Application state maintenance is integral to providing reliable, fault-tolerant web services. We discuss three different models to solve the problem and compare the advantages and disadvantages of each. Experimental results show that the choice of which model to use depends on application requirements. Many important emerging applications will involve the communication of potentially large time-sequenced data streams among heterogeneous clients with varying QoS requirements. D-Stampede.NET is an implementation of a system designed to support the development of such applications. We describe our web service implementation along with our state server solution to the application state management problem. A simple demo application is described and measured to validate performance.
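
    The state-server idea the abstract refers to can be sketched as follows: service instances stay stateless and keep shared application state in a separate store, so any replica or a restarted instance can recover it. The classes and methods here are hypothetical and only illustrate the pattern, not the paper's implementation.

    ```python
    # Stateless services backed by a shared state server (illustrative names).
    class StateServer:
        def __init__(self):
            self._store = {}

        def put(self, app, key, value):
            self._store[(app, key)] = value

        def get(self, app, key, default=None):
            return self._store.get((app, key), default)

    class StockQuoteService:
        # Application state (last quotes seen) lives in the state server,
        # not in the service instance, so it survives instance failure.
        def __init__(self, state):
            self.state = state

        def record_quote(self, symbol, price):
            self.state.put("quotes", symbol, price)

        def last_quote(self, symbol):
            return self.state.get("quotes", symbol)

    server = StateServer()
    svc_a = StockQuoteService(server)
    svc_b = StockQuoteService(server)   # a second replica sees the same state
    svc_a.record_quote("XYZ", 42.0)
    print(svc_b.last_quote("XYZ"))      # 42.0
    ```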

    DFuse: A Framework for Distributed Data Fusion

    Simple in-network data aggregation (or fusion) techniques for sensor networks have been the focus of several recent research efforts, but they are insufficient to support advanced fusion applications. We extend these techniques to future sensor networks and ask two related questions: (a) what is the appropriate set of data fusion techniques, and (b) how do we dynamically assign aggregation roles to the nodes of a sensor network? We have developed an architectural framework, DFuse, for answering these two questions. It consists of a data fusion API and a distributed algorithm for energy-aware role assignment. The fusion API enables an application to be specified as a coarse-grained dataflow graph, and it eases application development and deployment. The role assignment algorithm maps the graph onto the network and optimally adapts the mapping at run time using role migration. Experiments on an iPAQ farm show that the fusion API has low overhead and that the role assignment algorithm with role migration significantly increases network lifetime compared to any static assignment.
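
    As an illustration of the two ingredients the abstract names, the sketch below expresses an application as a small dataflow graph and places each fusion role with a naive energy-aware heuristic. The graph, node names, and costs are assumptions for illustration; DFuse's actual role assignment is distributed and adapts via role migration.

    ```python
    # Fusion graph: each fusion role lists the streams it consumes.
    fusion_graph = {
        "collage": ["camera1", "camera2"],
        "filter":  ["collage"],
    }

    # Remaining energy (hypothetical units) per network node.
    energy = {"node1": 80, "node2": 55, "node3": 97}

    def assign_roles(graph, energy):
        assignment = {}
        remaining = dict(energy)
        for role in graph:
            # Greedily place each role on the node with the most energy left.
            host = max(remaining, key=remaining.get)
            assignment[role] = host
            remaining[host] -= 10   # assumed fixed cost of hosting a role
        return assignment

    print(assign_roles(fusion_graph, energy))
    # e.g. {'collage': 'node3', 'filter': 'node3'} for the costs above
    ```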