Certified Impossibility Results for Byzantine-Tolerant Mobile Robots
We propose a framework to build formal developments for robot networks using
the COQ proof assistant, to state and to prove formally various properties. We
focus in this paper on impossibility proofs, as it is natural to take advantage
of the COQ higher order calculus to reason about algorithms as abstract
objects. We present in particular formal proofs of two impossibility results
for convergence of oblivious mobile robots when, respectively, more than one half
and more than one third of the robots exhibit Byzantine failures, starting from
the original theorems by Bouzid et al. Thanks to our formalization, the
corresponding COQ developments are quite compact. To our knowledge, these are
the first certified (in the sense of formally proved) impossibility results for
robot networks
Transparent and scalable client-side server selection using netlets
Replication of web content in the Internet has been found to improve service response time, performance and reliability offered by web services. When working with such distributed server systems, the location of servers with respect to client nodes is found to affect service response time perceived by clients in addition to server load conditions. This is due to the characteristics of the network path segments through which client requests get routed. Hence, a number of researchers have advocated making server selection decisions at the client-side of the network. In this paper, we present a transparent approach for client-side server selection in the Internet using Netlet services. Netlets are autonomous, nomadic mobile software components which persist and roam in the network independently, providing predefined network services. In this application, Netlet based services embedded with intelligence to support server selection are deployed by servers close to potential client communities to set up dynamic service decision points within the network. An anycast address is used to identify available distributed decision points in the network. Each service decision point transparently directs client requests to the best performing server based on its in-built intelligence supported by real-time measurements from probes sent by the Netlet to each server. It is shown that the resulting system provides a client-side server selection solution which is server-customisable, scalable and fault transparent
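The probe-driven selection step can be sketched as follows; this is a minimal illustration under assumed replica names and simulated latencies, not the Netlet implementation:

```python
import random

def probe_rtt(server):
    """Probe the round-trip time to a server. Simulated here with random
    latencies; a real decision point, like the Netlets described above,
    would send actual probe packets and use measured RTTs."""
    return random.uniform(0.01, 0.2)

def select_server(servers):
    """Transparently direct a client request to the best-performing
    (lowest-RTT) server among the available replicas."""
    return min(servers, key=probe_rtt)

# Hypothetical replica names for illustration only.
replicas = ["replica-a.example.net", "replica-b.example.net", "replica-c.example.net"]
print(select_server(replicas) in replicas)  # True
```

Re-probing on each request is what lets the decision point track changing server load, at the cost of probe traffic.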
A Radio Link Quality Model and Simulation Framework for Improving the Design of Embedded Wireless Systems
Despite the increasing application of embedded wireless systems, developers face numerous challenges during the design phase of the application life cycle. One of the critical challenges is ensuring performance reliability with respect to radio link quality. Specifically, embedded links experience exaggerated link quality variation, which results in undesirable wireless performance characteristics. Unfortunately, the resulting post-deployment behaviors often necessitate network redeployment. Another challenge is recovering from faults that commonly occur in embedded wireless systems, including node failure and state corruption. Self-stabilizing algorithms can provide recovery in the presence of such faults. These algorithms guarantee the eventual satisfaction of a given state legitimacy predicate regardless of the initial state of the network. Their practical behavior is often different from theoretical analyses. Unfortunately, there is little tool support for facilitating the experimental analysis of self-stabilizing systems. We present two contributions to support the design phase of embedded wireless system development. First, we provide two empirical models that predict radio-link quality within specific deployment environments. These models predict link performance as a function of inter-node distance and radio power level. The models are culled from extensive experimentation in open grass field and dense forest environments using all radio power levels and covering up to the maximum distances reachable by the radio. Second, we provide a simulation framework for simulating self-stabilizing algorithms. The framework provides three feature extensions: (i) fault injection to study algorithm behavior under various fault scenarios; (ii) automated detection of non-stabilizing behavior; and (iii) integration of the link quality models described above. Our contributions aim at avoiding problems that could result in the need for network redeployment
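The paper's link-quality models are empirical; as a rough stand-in, a generic log-distance path-loss model with a logistic SNR-to-PRR mapping illustrates the kind of distance- and power-dependent prediction such a model makes. All constants below are illustrative assumptions, not fitted values from the paper:

```python
import math

def prr_estimate(distance_m, tx_power_dbm,
                 path_loss_exp=3.0, ref_loss_db=40.0, noise_floor_dbm=-95.0):
    """Predict a packet-reception-rate proxy from distance and transmit power.

    Uses a textbook log-distance path-loss model plus a logistic mapping
    from SNR to PRR. Every constant here is an illustrative assumption."""
    path_loss_db = ref_loss_db + 10 * path_loss_exp * math.log10(max(distance_m, 1.0))
    snr_db = (tx_power_dbm - path_loss_db) - noise_floor_dbm
    return 1.0 / (1.0 + math.exp(-(snr_db - 10.0) / 2.0))

# Predicted link quality degrades with distance at a fixed power level.
print(prr_estimate(5.0, 0) > prr_estimate(100.0, 0))  # True
```

An empirical model like the paper's would replace the analytic path-loss term with curves fitted per environment (open field vs. dense forest).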
On “Sourcery,” or Code as Fetish
This essay offers a sympathetic interrogation of the move within new media studies toward “software studies.” Arguing against theoretical conceptions of programming languages as the ultimate performative utterance, it contends that source code is never simply the source of any action; rather, source code is only source code after the fact: its effectiveness depends on a whole imagined network of machines and humans. This does not mean that source code does nothing, but rather that it serves as a kind of fetish, and that the notion of the user as super agent, buttressed by real-time computation, is the obverse, not the opposite of this “sourcery.”
On the performance of emerging wireless mesh networks
Wireless networks are increasingly used within pervasive computing. The recent development of low-cost sensors, coupled with the decline in prices of embedded hardware and improvements in low-power, low-rate wireless networks, has made them ubiquitous. The sensors are becoming smaller and smarter, enabling them to be embedded inside tiny hardware. They are already being used in various areas such as health care, industrial automation and environment monitoring. Thus, the data to be communicated can include room temperature, heartbeat, users’ activities or seismic events. Such networks have been deployed in a wide range of areas and at various levels of scale. A deployment can include only a couple of sensors inside a human body or hundreds of sensors monitoring the environment. The sensors are capable of generating a huge amount of information when data is sensed regularly. The information has to be communicated to a central node in the sensor network or to the Internet. A sensor may be connected directly to the central node, but it may also be connected via other sensor nodes acting as intermediate routers/forwarders. The bandwidth of a typical wireless sensor network is already small, and the use of forwarders to pass the data to the central node decreases the network capacity even further.
Wireless networks suffer from high packet loss ratios in addition to low network bandwidth. The data transfer time from the sensor nodes to the central node increases with network size, so it becomes challenging to communicate the sensed data regularly, especially as the network grows. This makes it very difficult to create a scalable sensor network which can regularly communicate sensor data. The problem can be tackled either by improving the available network bandwidth or by reducing the amount of data communicated in the network. Improving the network bandwidth is not possible, as power limitations on the devices restrict the use of faster network standards. Nor is it acceptable to reduce the quality of the sensed data, losing information before communication. However, the data can be reduced without losing any information using compression techniques, and the improving processing power of embedded devices makes this feasible.
In this research, the challenges and impacts of data compression on embedded devices are studied with the aim of improving the network performance and the scalability of sensor networks. To evaluate this, messaging protocols suitable for embedded devices are first studied and a messaging model to communicate sensor data is determined. Then data compression techniques which can be implemented on devices with limited resources and are suitable for compressing typical sensor data are studied. Although compression can reduce the amount of data to be communicated over a wireless network, the time and energy costs of the process must be considered to justify the benefits. In other words, the combined compression and data transfer time must be smaller than the uncompressed data transfer time, and the compression and data transfer process must consume less energy than the uncompressed data transfer process. Network communication is known to be more expensive than on-device computation in terms of energy consumption. A data sharing system is created to study the time and energy consumption trade-offs of compression techniques. A mathematical model is also used to study the impact of compression on the overall network performance of sensor networks of various scales
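The break-even condition above can be sketched numerically; the payload size, compression ratios, and the 250 kbit/s link rate are illustrative assumptions:

```python
def transfer_time_s(size_bytes, bandwidth_bps):
    """Time to push a payload over the radio at the given bit rate."""
    return size_bytes * 8 / bandwidth_bps

def compression_worthwhile(raw_bytes, ratio, compress_time_s, bandwidth_bps):
    """True when compressing first beats sending the raw data directly,
    i.e. compress time + compressed transfer time < raw transfer time."""
    compressed_bytes = raw_bytes * ratio
    return (compress_time_s + transfer_time_s(compressed_bytes, bandwidth_bps)
            < transfer_time_s(raw_bytes, bandwidth_bps))

# A 10 kB payload over an assumed 250 kbit/s link: halving the data in
# 50 ms pays off, while a weak 0.9 ratio costing 300 ms does not.
print(compression_worthwhile(10_000, 0.5, 0.05, 250_000))  # True
print(compression_worthwhile(10_000, 0.9, 0.30, 250_000))  # False
```

The analogous energy condition substitutes per-bit radio energy and per-cycle CPU energy for the time terms.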
Practical issues of implementing a hybrid multi-NIC wireless mesh-network
Testbeds are a powerful tool to study wireless mesh and sensor networks as
close as possible to real world application scenarios. In contrast to
simulation or analytical approaches these installations face various kinds of
environment parameters. Challenges arise from the shared physical medium, the
operating system, and the hardware components used. In this technical report
about the work-in-progress Distributed Embedded Systems testbed of 100 routers
deployed at the Freie Universität Berlin, we focus on the software architecture
and give an introduction to the network protocol stack of the
Linux kernel. Furthermore, we discuss our first experiences with a pilot
network setup, the problems encountered, and the solutions achieved. This
report continues our first publication and builds upon the overall testbed
architecture, experiment methodology, and research objectives discussed
there
Development of mobile agent framework in wireless sensor networks for multi-sensor collaborative processing
Recent advances in processor, memory and radio technology have enabled the production of tiny, low-power, low-cost sensor nodes capable of sensing, communication and computation. Although a single node is resource constrained, with limited power, limited computation and limited communication bandwidth, these nodes deployed in large numbers form a new type of network called the wireless sensor network (WSN). One of the challenges brought by WSNs is an efficient computing paradigm to support the distributed nature of the applications built on these networks, considering the resource limitations of the sensor nodes. Collaborative processing between multiple sensor nodes is essential to generate fault-tolerant, reliable information from the densely sensed spatial phenomenon. The typical model used in distributed computing is the client/server model. However, this computing model is not appropriate in the context of sensor networks. This thesis develops an energy-efficient, scalable and real-time computing model for collaborative processing in sensor networks called the mobile agent computing paradigm. In this paradigm, instead of each sensor node sending data or results to a central server, as is typical in the client/server model, the information processing code is moved to the nodes using mobile agents. These agents carry the execution code and migrate from one node to another, integrating results at each node. This thesis develops the mobile agent framework on top of an energy-efficient routing protocol called directed diffusion. The mobile agent framework described has been mapped to a collaborative target classification application. This application has been tested in three field demos conducted at Twentynine Palms, CA; BAE Austin, TX; and BBN Waltham, MA
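The migrate-and-integrate behaviour described above can be sketched as follows; the class names and the max-aggregation are hypothetical illustrations, not the framework's API:

```python
class SensorNode:
    """A node that produces one local reading when visited."""
    def __init__(self, reading):
        self.reading = reading

    def sense(self):
        return self.reading

class MobileAgent:
    """Toy agent: carries its processing code along an itinerary and
    folds each node's reading into the result it transports."""
    def __init__(self, process):
        self.process = process  # the execution code the agent carries
        self.result = None

    def migrate(self, itinerary):
        for node in itinerary:  # hop from node to node
            self.result = self.process(self.result, node.sense())
        return self.result      # final integrated result

# Aggregate the maximum reading across three visited nodes.
agent = MobileAgent(lambda acc, r: r if acc is None else max(acc, r))
print(agent.migrate([SensorNode(3), SensorNode(9), SensorNode(5)]))  # 9
```

The energy saving of the paradigm comes from transmitting only the small integrated result between hops instead of each node's raw data to a central server.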
Abstracting information on body area networks
Healthcare is changing, correction... healthcare is in need of change. The ageing of the population, the increase in chronic and heart diseases, and the sheer increase in population size will overwhelm the current hospital-centric healthcare.
There is a growing interest among individuals in monitoring their own physiology, not only for sport activities but also to manage their own diseases. They are changing from passive healthcare receivers to proactive self-healthcare takers. The focus is shifting from hospital-centred treatment to patient-centric healthcare monitoring.
Continuous, everyday, wearable monitoring and actuating is part of this change. In this setting, sensors that monitor the heart, blood pressure, movement, brain activity and dopamine levels, and actuators that pump insulin, “pump” the heart, deliver drugs to specific organs and stimulate the brain, are needed as pervasive components in and on the body. They will tend to people’s need for self-monitoring and facilitate healthcare delivery.
These components around a human body that communicate to sense and act in a coordinated fashion make up a Body Area Network (BAN). In most cases, and in our view, a central, more powerful component will act as the coordinator of this network. These networks aim to augment the power to monitor the human body and to react to problems discovered through this observation. One key advantage of such a system is its overarching view of the whole network: the central component can have an understanding of all the monitored signals and correlate them to better evaluate and react to problems. This is the focus of our thesis.
In this document we argue that this multi-parameter correlation of the heterogeneous sensed information is not being handled in BANs. The current view depends exclusively on the application that is using the network and its understanding of the parameters. This means that every application will oversee the BAN’s heterogeneous resources, managing them directly without taking into consideration other applications, their needs and knowledge.
There are several physiological correlations already known to the medical field. Examples include correlating blood pressure and the cross-sectional area of blood vessels to calculate blood velocity, or estimating oxygen delivery from cardiac output and oxygen saturation. This knowledge should be available in a BAN and shared by the various applications that make use of the network. This architecture implies a central component that manages the knowledge and the resources, and this is, in our view, missing in BANs.
Our proposal is a middleware layer that abstracts the underlying BAN’s resources to the application, providing instead an information model to be queried. The model describes the correlations for producing new information that the middleware knows about. Naturally, the raw sensed data is also part of the model. The middleware hides the specificities of the nodes that constitute the BAN, by making available their sensed production. Applications are able to query for information attaching requirements to these requests. The middleware is then responsible for satisfying the requests while optimising the resource usage of the BAN.
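The query model described above can be sketched as follows; this is a minimal illustration under assumed names and constants, not the thesis prototype:

```python
class BANMiddleware:
    """Sketch of an information model for a BAN middleware: names map
    either to raw sensors or to correlations derived from other entries."""
    def __init__(self):
        self.raw = {}      # name -> callable returning a sensed value
        self.derived = {}  # name -> (input names, combining function)

    def register_sensor(self, name, read):
        self.raw[name] = read

    def register_correlation(self, name, inputs, fn):
        self.derived[name] = (inputs, fn)

    def query(self, name):
        """Resolve a query from a raw sensor or, recursively, a correlation."""
        if name in self.raw:
            return self.raw[name]()
        inputs, fn = self.derived[name]
        return fn(*(self.query(i) for i in inputs))

ban = BANMiddleware()
ban.register_sensor("cardiac_output", lambda: 5.0)       # L/min, illustrative reading
ban.register_sensor("oxygen_saturation", lambda: 0.98)
# Oxygen delivery estimated from cardiac output and saturation; the
# haemoglobin-related constants are illustrative, not clinical guidance.
ban.register_correlation("oxygen_delivery",
                         ["cardiac_output", "oxygen_saturation"],
                         lambda co, sat: co * sat * 1.34 * 150)
print(round(ban.query("oxygen_delivery"), 1))
```

Because applications query names rather than nodes, the middleware is free to schedule sensing and share intermediate results across applications, which is where the resource optimisation takes place.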
Our architecture proposal is divided into two corresponding layers: one that abstracts the nodes’ hardware (hiding nodes’ particularities) and an information layer that describes the information available and how it is correlated. A prototype implementation of the architecture was done to illustrate the concept. This work was partially supported by PhD scholarship SFRH/BD/28843/2006 from Fundação da Ciência e Tecnologia from Portugal