
    Certified Impossibility Results for Byzantine-Tolerant Mobile Robots

    We propose a framework for building formal developments for robot networks using the Coq proof assistant, in order to state and prove various properties formally. In this paper we focus on impossibility proofs, as it is natural to take advantage of Coq's higher-order calculus to reason about algorithms as abstract objects. In particular, we present formal proofs of two impossibility results for convergence of oblivious mobile robots when, respectively, more than one half and more than one third of the robots exhibit Byzantine failures, starting from the original theorems by Bouzid et al. Thanks to our formalization, the corresponding Coq developments are quite compact. To our knowledge, these are the first certified (in the sense of formally proved) impossibility results for robot networks.
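A compact restatement of the two thresholds the abstract names, writing n for the total number of robots and f for the number of Byzantine ones (this notation is assumed for illustration, not taken from the paper):

```latex
% Convergence of oblivious mobile robots is impossible when
f > \frac{n}{2} \quad \text{and, respectively,} \quad f > \frac{n}{3}
% depending on the model considered, per the theorems of Bouzid et al.
```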

    Transparent and scalable client-side server selection using netlets

    Replication of web content in the Internet has been found to improve the service response time, performance and reliability offered by web services. When working with such distributed server systems, the location of servers with respect to client nodes is found to affect the service response time perceived by clients, in addition to server load conditions. This is due to the characteristics of the network path segments through which client requests get routed. Hence, a number of researchers have advocated making server selection decisions at the client side of the network. In this paper, we present a transparent approach to client-side server selection in the Internet using Netlet services. Netlets are autonomous, nomadic mobile software components which persist and roam in the network independently, providing predefined network services. In this application, Netlet-based services embedded with intelligence to support server selection are deployed by servers close to potential client communities to set up dynamic service decision points within the network. An anycast address is used to identify available distributed decision points in the network. Each service decision point transparently directs client requests to the best-performing server based on its built-in intelligence, supported by real-time measurements from probes sent by the Netlet to each server. It is shown that the resulting system provides a client-side server selection solution which is server-customisable, scalable and fault-transparent.
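The selection step at a decision point can be sketched as picking the server with the lowest mean probed round-trip time. This is a minimal illustration of the idea, not the Netlet implementation; the server names and latency figures are made up, and `probe_rtt` stands in for a real network probe:

```python
import random

def probe_rtt(server):
    """Simulated probe: returns a round-trip time in seconds.
    A real decision point would send an active probe to the server."""
    base = {"server-a": 0.020, "server-b": 0.045, "server-c": 0.030}
    return base[server] + random.uniform(0.0, 0.005)  # small jitter

def select_server(servers, samples=3):
    """Direct the client to the server with the lowest mean probed RTT."""
    def mean_rtt(s):
        return sum(probe_rtt(s) for _ in range(samples)) / samples
    return min(servers, key=mean_rtt)

servers = ["server-a", "server-b", "server-c"]
print(select_server(servers))  # → server-a (lowest base latency)
```

Because clients address the anycast decision point rather than a specific server, the selection stays transparent to them.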

    A Radio Link Quality Model and Simulation Framework for Improving the Design of Embedded Wireless Systems

    Despite the increasing application of embedded wireless systems, developers face numerous challenges during the design phase of the application life cycle. One of the critical challenges is ensuring performance reliability with respect to radio link quality. Specifically, embedded links experience exaggerated link quality variation, which results in undesirable wireless performance characteristics. Unfortunately, the resulting post-deployment behaviors often necessitate network redeployment. Another challenge is recovering from faults that commonly occur in embedded wireless systems, including node failure and state corruption. Self-stabilizing algorithms can provide recovery in the presence of such faults. These algorithms guarantee the eventual satisfaction of a given state legitimacy predicate regardless of the initial state of the network. Their practical behavior is often different from theoretical analyses. Unfortunately, there is little tool support for facilitating the experimental analysis of self-stabilizing systems. We present two contributions to support the design phase of embedded wireless system development. First, we provide two empirical models that predict radio link quality within specific deployment environments. These models predict link performance as a function of inter-node distance and radio power level. The models are culled from extensive experimentation in open grass field and dense forest environments, using all radio power levels and covering distances up to the maximum reachable by the radio. Second, we provide a simulation framework for simulating self-stabilizing algorithms. The framework provides three feature extensions: (i) fault injection to study algorithm behavior under various fault scenarios, (ii) automated detection of non-stabilizing behavior, and (iii) integration of the link quality models described above. Our contributions aim at avoiding problems that could result in the need for network redeployment.
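A link-quality model of the kind described, mapping inter-node distance and transmit power to a predicted packet reception ratio (PRR), might look like the toy sketch below. The path-loss form, coefficients, and SNR-to-PRR mapping here are illustrative assumptions, not the paper's fitted empirical models:

```python
import math

def predicted_prr(distance_m, tx_power_dbm,
                  path_loss_exp=2.8, ref_loss_db=40.0, noise_floor_dbm=-95.0):
    """Toy model: predict packet reception ratio from distance and TX power.
    Uses log-distance path loss plus a logistic SNR-to-PRR mapping.
    All coefficients are made up for illustration."""
    if distance_m <= 0:
        return 1.0
    # received power under a log-distance path-loss model
    rx_power = tx_power_dbm - ref_loss_db - 10 * path_loss_exp * math.log10(distance_m)
    snr = rx_power - noise_floor_dbm
    # soft threshold: links transition from good to bad around ~10 dB SNR
    return 1.0 / (1.0 + math.exp(-(snr - 10.0)))

print(round(predicted_prr(10.0, 0.0), 3))   # near-perfect link at 10 m
print(round(predicted_prr(100.0, 0.0), 3))  # link collapses at 100 m
```

An empirical model of this shape, once fitted per environment (open grass field vs. dense forest), lets a simulator predict pre-deployment whether a planned topology will hold up.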

    On “Sourcery,” or Code as Fetish

    This essay offers a sympathetic interrogation of the move within new media studies toward “software studies.” Arguing against theoretical conceptions of programming languages as the ultimate performative utterance, it contends that source code is never simply the source of any action; rather, source code is only source code after the fact: its effectiveness depends on a whole imagined network of machines and humans. This does not mean that source code does nothing, but rather that it serves as a kind of fetish, and that the notion of the user as super agent, buttressed by real-time computation, is the obverse, not the opposite, of this “sourcery.”

    On the performance of emerging wireless mesh networks

    Wireless networks are increasingly used within pervasive computing. The recent development of low-cost sensors, coupled with the decline in prices of embedded hardware and improvements in low-power, low-rate wireless networks, has made them ubiquitous. The sensors are becoming smaller and smarter, enabling them to be embedded inside tiny hardware. They are already being used in various areas such as health care, industrial automation and environment monitoring. Thus, the data to be communicated can include room temperature, heartbeat, users’ activities or seismic events. Such networks have been deployed across a wide range of areas and at various scales. A deployment can include only a couple of sensors inside the human body, or hundreds of sensors monitoring the environment. The sensors are capable of generating a huge amount of information when data is sensed regularly. The information has to be communicated to a central node in the sensor network or to the Internet. A sensor may be connected directly to the central node, but it may also be connected via other sensor nodes acting as intermediate routers/forwarders. The bandwidth of a typical wireless sensor network is already small, and the use of forwarders to pass the data to the central node decreases the network capacity even further. Wireless networks also suffer from high packet loss ratios on top of the low network bandwidth. The data transfer time from the sensor nodes to the central node increases with network size. Thus it becomes challenging to communicate the sensed data regularly, especially as the network grows in size. Due to this problem, it is very difficult to create a scalable sensor network which can regularly communicate sensor data. The problem can be tackled either by improving the available network bandwidth or by reducing the amount of data communicated in the network.
    It is not possible to improve the network bandwidth, as power limitations on the devices restrict the use of faster network standards. Nor is it acceptable to reduce the quality of the sensed data, losing information before communication. However, the data can be reduced without losing any information using compression techniques, and the processing power of embedded devices is improving enough to make this possible. In this research, the challenges and impacts of data compression on embedded devices are studied with the aim of improving the network performance and scalability of sensor networks. In order to evaluate this, firstly, messaging protocols which are suitable for embedded devices are studied and a messaging model to communicate sensor data is determined. Then data compression techniques which can be implemented on devices with limited resources, and which are suitable for compressing typical sensor data, are studied. Although compression can reduce the amount of data to be communicated over a wireless network, the time and energy costs of the process must be considered to justify the benefits. In other words, the combined compression and data transfer time must be smaller than the uncompressed data transfer time. Likewise, the compression and data transfer process must consume less energy than the uncompressed data transfer process. Network communication is known to be more expensive than on-device computation in terms of energy consumption. A data sharing system is created to study the time and energy consumption trade-offs of compression techniques. A mathematical model is also used to study the impact of compression on the overall network performance of sensor networks at various scales.
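The time trade-off stated above (compression pays off only when compression time plus compressed transfer time beats the raw transfer time) can be checked with a few lines of arithmetic. The link speed, data size, compression ratio, and CPU cost below are illustrative figures, not measurements from the thesis:

```python
def compression_pays_off(data_bytes, ratio, bandwidth_bps, compress_time_s):
    """Return True if compressing before sending is faster than sending raw.
    ratio is the compression ratio (e.g. 2.0 means output is half the size)."""
    raw_time = data_bytes * 8 / bandwidth_bps                      # seconds to send raw
    comp_time = compress_time_s + (data_bytes / ratio) * 8 / bandwidth_bps
    return comp_time < raw_time

# 100 kB over a 250 kbit/s 802.15.4-class link, 2:1 ratio, 0.5 s CPU cost:
# raw takes 3.2 s; compress-and-send takes 0.5 + 1.6 = 2.1 s
print(compression_pays_off(100_000, 2.0, 250_000, 0.5))  # → True
```

The same inequality applies to energy, with per-bit radio cost and per-byte CPU cost in place of the two times; since radio is the more expensive of the two, compression tends to pay off sooner on the energy side.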

    Practical issues of implementing a hybrid multi-NIC wireless mesh-network

    Testbeds are a powerful tool for studying wireless mesh and sensor networks as close as possible to real-world application scenarios. In contrast to simulation or analytical approaches, these installations face various kinds of environmental parameters. Challenges related to the shared physical medium, the operating system, and the hardware components used do arise. In this technical report on the work-in-progress Distributed Embedded Systems testbed of 100 routers deployed at the Freie Universität Berlin, we focus on the software architecture and give an introduction to the network protocol stack of the Linux kernel. Furthermore, we discuss our first experiences with a pilot network setup, the problems encountered and the solutions achieved. This writing continues our first publication and builds upon the previously discussed overall testbed architecture, our experiment methodology, and the research objectives we aspire to.

    Development of mobile agent framework in wireless sensor networks for multi-sensor collaborative processing

    Recent advances in processor, memory and radio technology have enabled the production of tiny, low-power, low-cost sensor nodes capable of sensing, communication and computation. Although a single node is resource-constrained, with limited power, limited computation and limited communication bandwidth, these nodes deployed in large numbers form a new type of network called the wireless sensor network (WSN). One of the challenges brought by WSNs is finding an efficient computing paradigm to support the distributed nature of the applications built on these networks, given the resource limitations of the sensor nodes. Collaborative processing between multiple sensor nodes is essential to generate fault-tolerant, reliable information from the densely sensed spatial phenomenon. The typical model used in distributed computing is the client/server model. However, this computing model is not appropriate in the context of sensor networks. This thesis develops an energy-efficient, scalable and real-time computing model for collaborative processing in sensor networks called the mobile agent computing paradigm. In this paradigm, instead of each sensor node sending data or results to a central server, as is typical in the client/server model, the information processing code is moved to the nodes using mobile agents. These agents carry the execution code and migrate from one node to another, integrating results at each node. This thesis develops the mobile agent framework on top of an energy-efficient routing protocol called directed diffusion. The mobile agent framework described has been mapped to a collaborative target classification application. This application has been tested in three field demos conducted at Twentynine Palms, CA; BAE Austin, TX; and BBN Waltham, MA.
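The core of the mobile agent paradigm, an agent migrating along an itinerary and folding each node's local data into a running result instead of shipping raw data to a central server, can be sketched as follows. The class, itinerary, and integration function here are hypothetical illustrations, not the thesis framework's API (which runs on top of directed diffusion):

```python
class MobileAgent:
    """Minimal sketch of a mobile agent: it carries its execution code
    (the integrate function) and a partial result from node to node."""
    def __init__(self, itinerary, integrate):
        self.itinerary = itinerary      # ordered list of node IDs to visit
        self.integrate = integrate      # code the agent carries with it
        self.result = None              # partial result accumulated en route

    def run(self, network):
        """Migrate across the network, integrating local data at each node."""
        for node_id in self.itinerary:
            local_data = network[node_id]              # sense at current node
            self.result = self.integrate(self.result, local_data)
        return self.result

# Three nodes with local sensor readings; the agent collects the maximum.
network = {"n1": 3, "n2": 5, "n3": 4}
agent = MobileAgent(["n1", "n2", "n3"],
                    lambda acc, x: x if acc is None else max(acc, x))
print(agent.run(network))  # → 5
```

Only the agent (code plus a small partial result) crosses the network, which is the energy argument for this paradigm over client/server data collection.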