
    Randomized Initialization of a Wireless Multihop Network

    Address autoconfiguration is an important mechanism required to set the IP address of a node automatically in a wireless network. Address autoconfiguration, also known as initialization or naming, consists of assigning a unique identifier ranging from 1 to n to a set of n indistinguishable nodes. We consider a wireless network where n nodes (processors) are thrown in a square X uniformly and independently at random. We assume that the network is synchronous and that two nodes are able to communicate if they are within distance at most r of each other (r is the transmitting/receiving range). The model of this paper concerns nodes without the collision detection ability: if two or more neighbors of a processor u transmit concurrently, then u receives neither message. We also suppose that nodes know neither the topology of the network nor the number of nodes in the network. Moreover, they start indistinguishable, anonymous and unnamed. Under this extremal scenario, we design and analyze a fully distributed protocol to achieve the initialization task for a wireless multihop network of n nodes uniformly scattered in a square X. We show how the transmitting range of the deployed stations affects typical characteristics such as the degrees and the diameter of the network. By allowing the nodes to transmit at a range r = \sqrt{(1+\ell) \ln(n) |X| / (\pi n)} (slightly greater than the one required to have a connected network), we show how to design a randomized protocol running in expected time O(n^{3/2} \log^2 n) in order to assign a unique number ranging from 1 to n to each of the n participating nodes
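The transmitting range in the abstract can be made concrete with a small Monte Carlo sketch (parameter values here are illustrative, not from the paper): compute r for the unit square (|X| = 1) and check connectivity of the resulting unit-disk graph by breadth-first search.

```python
import math
import random

def critical_range(n, ell, area=1.0):
    # r = sqrt((1 + ell) * ln(n) * |X| / (pi * n)); choosing ell > 0 puts r
    # slightly above the connectivity threshold for n uniform points.
    return math.sqrt((1 + ell) * math.log(n) * area / (math.pi * n))

def is_connected(points, r):
    # BFS over the unit-disk graph: u ~ v iff dist(u, v) <= r.
    n = len(points)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in range(n):
            if v not in seen and math.dist(points[u], points[v]) <= r:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

random.seed(1)
n, ell = 200, 0.5          # hypothetical deployment size and slack
r = critical_range(n, ell)
pts = [(random.random(), random.random()) for _ in range(n)]
print(f"r = {r:.4f}, connected: {is_connected(pts, r)}")
```

With n = 200 and ell = 0.5 this gives r ≈ 0.112; repeating the experiment over many seeds shows the graph is connected in the vast majority of trials, matching the "with high probability" claim.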

    Design verification of SIFT

    A SIFT reliable aircraft control computer system, designed to meet the ultrahigh reliability required for safety-critical flight control applications by use of processor replication and voting, was constructed by SRI and delivered to NASA Langley for evaluation in the AIRLAB. To increase confidence in the reliability projections for SIFT, produced by a Markov reliability model, SRI constructed a formal specification defining the meaning of reliability in the context of flight control. A further series of specifications defined, in increasing detail, the design of SIFT down to pre- and post-conditions on Pascal code procedures. Mechanically checked mathematical proofs were constructed to demonstrate that the more detailed design specifications for SIFT do indeed imply the formal reliability requirement. An additional specification defined some of the assumptions made about SIFT by the Markov model, and further proofs were constructed to show that these assumptions, as expressed by that specification, did indeed follow from the more detailed design specifications for SIFT. This report provides an outline of the methodology used for this hierarchical specification and proof, and describes the various specifications and proofs performed

    Labor Relations in the Broadcasting Industry


    Extremal Properties of Three Dimensional Sensor Networks with Applications

    In this paper, we analyze various critical transmitting/sensing ranges for connectivity and coverage in three-dimensional sensor networks. As in other large-scale complex systems, many global parameters of sensor networks undergo phase transitions: for a given property of the network, there is a critical threshold, corresponding to the minimum amount of communication effort or power expenditure by individual nodes, above (resp. below) which the property exists with high (resp. low) probability. For sensor networks, properties of interest include simple and multiple degrees of connectivity/coverage. First, we investigate the network topology according to the region of deployment, the number of deployed sensors and their transmitting/sensing ranges. More specifically, we consider the following problems: Assume that n nodes, each capable of sensing events within a radius of r, are randomly and uniformly distributed in a 3-dimensional region R of volume V; how large must the sensing range be to ensure a given degree of coverage of the region to monitor? For a given transmission range, what is the minimum (resp. maximum) degree of the network? What is then the typical hop-diameter of the underlying network? Next, we show how these results affect algorithmic aspects of the network by designing specific distributed protocols for sensor networks
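The k-coverage question above can be estimated empirically. The sketch below (a Monte Carlo estimator under assumed parameters, not the paper's analysis) samples random points in the unit cube and counts how many lie within sensing range r of at least k sensors.

```python
import math
import random

def coverage_fraction(sensors, r, samples, k=1, rng=random):
    # Estimate the fraction of the unit cube that is k-covered:
    # a point is k-covered if it lies within range r of >= k sensors.
    covered = 0
    for _ in range(samples):
        p = (rng.random(), rng.random(), rng.random())
        hits = sum(1 for s in sensors if math.dist(p, s) <= r)
        if hits >= k:
            covered += 1
    return covered / samples

random.seed(7)
# Hypothetical deployment: 100 sensors thrown uniformly in the unit cube.
sensors = [(random.random(), random.random(), random.random())
           for _ in range(100)]
print("1-coverage:", coverage_fraction(sensors, r=0.25, samples=2000))
```

Sweeping r and plotting the estimated coverage fraction makes the phase transition visible: the fraction jumps from near 0 to near 1 over a narrow band of r, which is exactly the critical-range phenomenon the abstract describes.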

    Provenance-enabled Packet Path Tracing in the RPL-based Internet of Things

    The interconnection of resource-constrained and globally accessible things with the untrusted and unreliable Internet makes them vulnerable to attacks including data forging, false data injection, and packet drop, which affect applications with critical decision-making processes. For data trustworthiness, reliance on provenance is considered an effective mechanism that tracks both data acquisition and data transmission. However, provenance management for sensor networks introduces several challenges, such as low energy, bandwidth consumption, and efficient storage. This paper attempts to identify packet drop (whether malicious or due to network disruptions) and detect faulty or misbehaving nodes in the Routing Protocol for Low-Power and Lossy Networks (RPL) by following a bi-fold provenance-enabled packet path tracing (PPPT) approach. Firstly, system-level ordered-provenance information encapsulates the data-generating nodes and the forwarding nodes in the data packet. Secondly, to closely monitor the dropped packets, node-level provenance in the form of the packet sequence number is enclosed as a routing entry in the routing table of each participating node. Lossless in nature, both approaches conserve the provenance size, satisfying the processing and storage requirements of IoT devices. Finally, we evaluate the efficacy of the proposed scheme with respect to provenance size, provenance generation time, and energy consumption.
    Comment: 14 pages, 18 figures
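The node-level provenance idea can be illustrated with a minimal sketch (names and structure are hypothetical, not the paper's implementation): each node keeps the last-seen sequence number per flow, so a gap between consecutively forwarded packets signals packets dropped upstream.

```python
# Hypothetical sketch of node-level provenance: each node records the
# last forwarded sequence number per flow, so a gap reveals dropped
# packets (whether malicious or due to link disruption).

class ProvenanceNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.last_seq = {}   # flow id -> last forwarded sequence number
        self.drops = {}      # flow id -> count of missing packets

    def forward(self, flow, seq):
        prev = self.last_seq.get(flow)
        if prev is not None and seq > prev + 1:
            # Gap in sequence numbers: packets prev+1 .. seq-1 never
            # arrived at this node.
            self.drops[flow] = self.drops.get(flow, 0) + (seq - prev - 1)
        self.last_seq[flow] = seq

node = ProvenanceNode("n7")
for seq in [1, 2, 5, 6]:     # packets 3 and 4 never arrive
    node.forward("flowA", seq)
print(node.drops)            # {'flowA': 2}
```

Because only one integer per flow is stored in the routing table, this bookkeeping stays constant-size per flow, which is in the spirit of the paper's "lossless, size-conserving" provenance claim.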

    Thinking Informatically

    On being promoted to a personal chair in 1993 I chose the title of Professor of Informatics, specifically acknowledging Donna Haraway’s definition of the term as the “technologies of information [and communication] as well as the biological, social, linguistic and cultural changes that initiate, accompany and complicate their development” [1]. This neatly encapsulated the plethora of issues emanating from these new technologies, inviting contributions and analyses from a wide variety of disciplines and practices. (In my later work Thinking Informatically [2] I added the phrase “and communication”.) In the intervening time the word informatics itself has been appropriated by those more focused on computer science, although why an alternative term is needed for a well-understood area is not entirely clear. Indeed the term is used both as an alternative term and as an additional one—i.e. “computer science and informatics”

    Regulating the Raters: The Law and Economics of Ratings Firms

    Consumers and producers frequently rely on product ratings, such as college rankings, restaurant reviews and bond ratings. While much has been written about the structure of ratings in particular industries, little has been written on the general structure of different ratings industries and whether government intervention is typically needed. This paper begins that inquiry by examining the market structure of different ratings industries, and considering the circumstances under which firms that provide ratings should be regulated. The issue is particularly timely in light of recent calls to rethink the regulation of media ratings and credit ratings. We find that ratings firms in different industries share several common features. For example, most ratings firms operate in highly concentrated markets. Some factors that could make ratings markets more concentrated include economies of scale, benefits from having a single standard, and general agreement on what should be measured. We also find that most ratings firms determine their own testing standards and methods, although some industries have self-governing oversight bodies that offer their own accreditation standards. While the government regulates firm entry for a few ratings industries, this is relatively rare. The vast majority of ratings firms are unregulated. We analyze the question of regulation using an economic framework that focuses on the viability and effectiveness of a proposed policy. Despite the finding that many ratings industries are concentrated, our analysis suggests that market forces generally appear to be an effective mechanism for providing consumers and producers with useful ratings. In most cases, such markets do not require government intervention. Moreover, in industries characterized by rapid technological change the government is likely to do more harm than good by intervening. 
As an alternative to government regulation, voluntary industry oversight bodies may be effective in improving communication between the parties and in improving transparency in rating procedures.

    Energy efficient secured cluster based distributed fault diagnosis protocol for IoT

    The rapid growth of the internet and of internet service provision offers wide scope for industries to couple various network models to design a flexible and simplified communication infrastructure. Significant attention has been paid to the Internet of Things (IoT) by both academia and industry. Connecting and organizing communication over wireless IoT network models is vulnerable to various security threats due to the lack of appropriate security deployment models. Beyond these security issues, such models also suffer from many performance issues. This research work deals with IoT security over a WSN model to overcome the security and performance issues by designing an Energy efficient secured cluster based distributed fault diagnosis protocol (EESCFD) model, which combines a self-fault-diagnosis routing model using a cluster-based approach with a block cipher to organize secure data communication and to identify security faults and communication faults, improving communication efficiency. In addition, we achieve energy efficiency by employing a concise block cipher, identifying the ideal block size, key size, and number of rounds for the key operations in the cipher
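A cluster-head fault-diagnosis round of the kind described above can be sketched as follows. This is an illustrative stand-in only: the paper uses a concise block cipher, whereas here stdlib HMAC-SHA256 authenticates the heartbeats; node names, keys, and the report format are all assumptions.

```python
import hashlib
import hmac

# Illustrative sketch: a cluster head authenticates per-round heartbeats
# from member nodes. A silent node is flagged as a communication fault;
# an unverifiable report is flagged as a security fault.

def make_heartbeat(key, node_id, round_no):
    msg = f"{node_id}:{round_no}".encode()
    return msg, hmac.new(key, msg, hashlib.sha256).digest()

def diagnose(member_keys, reports, round_no):
    # member_keys: node_id -> shared key; reports: node_id -> (msg, tag).
    faulty = []
    for node_id, key in member_keys.items():
        report = reports.get(node_id)
        if report is None:
            faulty.append(node_id)   # no heartbeat: communication fault
            continue
        msg, tag = report
        ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())
        if not ok or msg != f"{node_id}:{round_no}".encode():
            faulty.append(node_id)   # forged or stale report: security fault
    return faulty

keys = {"a": b"key-a", "b": b"key-b", "c": b"key-c"}
reports = {
    "a": make_heartbeat(b"key-a", "a", 7),   # valid
    "b": make_heartbeat(b"WRONG", "b", 7),   # forged (wrong key)
    # "c" sends nothing this round
}
print(diagnose(keys, reports, 7))            # ['b', 'c']
```

Separating the two failure classes (silent vs. unverifiable) mirrors the paper's distinction between communication faults and security faults, and the per-node key is what a cluster-based key distribution would provide.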