
    Network Coding for Cooperation in Wireless Networks

    Zero-padding Network Coding and Compressed Sensing for Optimized Packets Transmission

    Ubiquitous Internet of Things (IoT) is destined to connect everybody and everything on a never-before-seen scale. Such networks, however, have to tackle the inherent issues created by the presence of very heterogeneous data transmissions over the same shared network. This diverse communication produces network packets of various sizes, ranging from very small sensory readings to comparatively humongous video frames. Massive amounts of data, as in the case of sensor networks, are also captured continuously at varying rates and add to the load on the network, which could hinder transmission efficiency. However, the sheer number of transmissions also opens up possibilities to exploit correlations in the transmitted data. Reductions based on such correlations enable networks to keep up with the new wave of big-data-driven communications by investing in select techniques that use the resources of the communication system efficiently. One class of solutions to the erroneous transmission of data employs linear coding techniques, which are ill-equipped to handle packets of differing sizes. Random Linear Network Coding (RLNC), for instance, generates unreasonable amounts of padding overhead to compensate for the different message lengths, thereby suppressing the pervasive benefits of the coding itself. We propose a set of approaches that overcome such issues while also reducing decoding delays. Specifically, we introduce and elaborate on the concept of macro-symbols and the design of different coding schemes. To cope with the heterogeneity of packet sizes, our progressive shortening scheme is the first RLNC-based approach that generates and recodes unequal-sized coded packets. Another of our solutions, deterministic shifting, reduces the overall number of transmitted packets. Moreover, the RaSOR scheme performs coding by XORing shifted packets, without the need for coding coefficients, thus achieving linear encoding and decoding complexities. Another facet of IoT applications is found in sensory data known to be highly correlated, where compressed sensing is a potential approach to reduce the overall number of transmissions. In such scenarios, network coding can also help. Our proposed joint compressed sensing and real network coding design fully exploits the correlations in cluster-based wireless sensor networks, such as the ones advocated by Industry 4.0. This design focuses on one-step decoding to reduce the computational complexity and delay of the reconstruction process at the receiver, and investigates the effectiveness of combining compressed sensing and network coding.
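
    To make the padding-overhead issue concrete, the following is a minimal Python sketch (illustrative names, not the thesis' code): it contrasts zero-padding all packets to the longest length before XOR-combining them with a simple shifted-XOR combination in the spirit of RaSOR. The shift values are arbitrary inputs here; the deterministic shift selection and the decoding procedure from the thesis are not reproduced.

```python
# Minimal sketch (not the authors' RaSOR implementation): contrast the
# zero-padding overhead of combining unequal-sized packets with a simple
# shifted-XOR combination of the kind the abstract alludes to.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def pad_and_combine(packets):
    """Conventional preprocessing: zero-pad every packet to the longest
    length, then XOR them into a single GF(2)-coded packet."""
    max_len = max(len(p) for p in packets)
    padded = [p + bytes(max_len - len(p)) for p in packets]
    overhead = sum(max_len - len(p) for p in packets)  # wasted padding bytes
    coded = padded[0]
    for p in padded[1:]:
        coded = xor_bytes(coded, p)
    return coded, overhead

def shift_and_combine(packets, shifts):
    """Illustrative shifted-XOR combination: offset each packet by a given
    byte shift before XORing, so short packets need not be padded out to
    the longest length. The shifts are arbitrary inputs, not the
    deterministic shifts derived in the thesis."""
    total_len = max(len(p) + s for p, s in zip(packets, shifts))
    coded = bytearray(total_len)
    for p, s in zip(packets, shifts):
        for i, byte in enumerate(p):
            coded[s + i] ^= byte
    return bytes(coded)

if __name__ == "__main__":
    pkts = [b"tiny",
            b"a medium-sized sensor packet",
            b"a comparatively humongous video frame payload"]
    _, overhead = pad_and_combine(pkts)
    print("zero-padding overhead (bytes):", overhead)
    print("shifted-XOR coded length (bytes):",
          len(shift_and_combine(pkts, shifts=[0, 1, 2])))
```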

    Adelaide in-depth accident study 1975-1979. Part 1: An overview

    This report is a general introduction to, and review of, an in-depth study of road accidents to which an ambulance was called in the metropolitan area of Adelaide, South Australia. A representative 8% sample, comprising 304 accidents, was investigated in the 12-month period commencing March 23rd 1976. The general aims of this study are presented, followed by a detailed description of the sampling procedure which was adopted. The method of operation is then described, and the types of accidents investigated are presented in the form of the general characteristics of the accidents and of the drivers, riders, and pedestrians, together with a review of the consequences of these accidents. The major conclusions drawn from the results of the study are described briefly, including the ways in which factors such as alcohol and inexperience affect the safety of road users, the role played by vehicle factors and aspects of the road and traffic environment in accident causation, the main causes of injury to each class of road user, and the value of helmets and seatbelts. The companion reports on specific aspects of the accidents investigated are listed in the final section. A.J. McLean, G.K. Robinson

    Characterisation and performance analysis of random linear network coding for reliable and secure communication

    In this thesis, we develop theoretical frameworks to characterize the performance of Random Linear Network Coding (RLNC) and propose novel communication schemes for achieving both reliability and security in wireless networks. In particular, (i) we present an analytical model to evaluate the performance of practical RLNC schemes suitable for low-complexity receivers, prioritized (i.e., layered) coding and multi-hop communications, (ii) we investigate the performance of RLNC in relay-assisted networks and propose a new cross-layer RLNC-aided cooperative scheme for reliable communication, and (iii) we characterize the secrecy feature of RLNC and propose a new physical-application layer security technique for achieving security and reliability in multi-hop communications. At first, we investigate random block matrices and derive mathematical expressions for the enumeration of full-rank matrices that contain blocks of random entries arranged in a diagonal, lower-triangular or tri-diagonal structure. The derived expressions are then used to model the probability that a receiver will successfully decode a source message, or layers of a service, when RLNC based on non-overlapping, expanding or sliding generations is employed. Moreover, the design parameters of these schemes make it possible to adjust the desired decoding performance. Next, we evaluate the performance of Random Linear Network Coded Cooperation (RLNCC) in relay-assisted networks and propose a cross-layer cooperative scheme that combines the emerging Non-Orthogonal Multiple Access (NOMA) technique with RLNCC. In this regard, we first consider the multiple-access relay channel in a setting where two source nodes transmit packets to a destination node, both directly and via a relay node. Secondly, we consider a multi-source multi-relay network, in which relay nodes employ RLNC on source packets and generate coded packets. For each network, we build our analysis on fundamental probability expressions for random matrices over finite fields and derive theoretical expressions for the probability that the destination node will successfully decode the source packets. We then consider a multi-relay network comprising two groups of source nodes, where each group transmits packets to its own designated destination node over single-hop links and via a cluster of relay nodes shared by both groups. In an effort to boost reliability without sacrificing throughput, a scheme is proposed whereby packets at the relay nodes are combined using two methods: packets delivered by different groups are mixed using non-orthogonal multiple access principles, while packets originating from the same group are mixed using RLNC. An analytical framework that characterizes the performance of the proposed scheme is developed and benchmarked against a counterpart scheme based on orthogonal multiple access. Finally, we quantify and characterize the intrinsic security feature of RLNC and design a joint physical-application layer security technique. For this purpose, we first consider a network comprising a transmitter, which employs RLNC to encode a message, a legitimate receiver, and a passive eavesdropper. Closed-form analytical expressions are derived to evaluate the intercept probability of RLNC, and a resource allocation model is presented to further minimize the intercept probability. Afterwards, we propose a joint RLNC and opportunistic relaying scheme in a multi-relay network to transmit confidential data to a destination in the presence of an eavesdropper. Four relay selection protocols are studied, covering a range of network capabilities, such as the availability of the eavesdropper’s channel state information or the possibility of pairing the selected relay with a jammer node that intentionally generates interference. For each case, expressions of the probability that a coded packet will not be decoded by a receiver, which can be either the destination or the eavesdropper, are derived. Based on these expressions, a framework is developed that characterizes the probability of the eavesdropper intercepting a sufficient number of coded packets and partially or fully decoding the confidential data. We observe that the field size over which RLNC is performed at the application layer, as well as the adopted modulation and coding scheme at the physical layer, can be modified to fine-tune the trade-off between security and reliability.
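
    As a point of reference for the decoding analyses summarized above, the sketch below evaluates the classical full-rank probability of a dense random coefficient matrix over GF(q), the usual baseline against which block-structured (diagonal, lower-triangular, tri-diagonal) generations are compared. The function name and parameters are illustrative assumptions, not taken from the thesis.

```python
# Baseline (dense) RLNC decodability, not the thesis' block-structured results:
# the probability that a random n x k coefficient matrix over GF(q), with
# i.i.d. uniform entries, has rank k, i.e. that k source packets can be
# recovered from n >= k received coded packets.

def rlnc_decoding_probability(k: int, n: int, q: int) -> float:
    """P(full rank) = prod_{i=0}^{k-1} (1 - q^{-(n - i)}) for n >= k."""
    if n < k:
        return 0.0
    p = 1.0
    for i in range(k):
        p *= 1.0 - q ** -(n - i)
    return p

if __name__ == "__main__":
    # Receiving a few extra packets or using a larger field makes failure rare.
    for q in (2, 256):
        for extra in (0, 1, 2):
            prob = rlnc_decoding_probability(16, 16 + extra, q)
            print(f"q={q:3d}, n=k+{extra}: P(decode)={prob:.6f}")
```

    For GF(2) and n = k, the product approaches roughly 0.289 as k grows, which illustrates why extra coded packets or larger field sizes are needed for reliable decoding.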

    From Construction to Production: Enablers, Barriers and Opportunities for the Highways Supply Chain

    The report presents the initial findings of a project, part of the Lean Collaborative Research between Highways England and academia, that aims to understand the enablers, barriers and opportunities involved in transforming the current highways construction supply chain into a more manufacturing-like environment, in which the benefits of production thinking can be achieved. The focus of the project is mostly on the adoption of off-site/modular (O/M) construction systems and advanced technologies, under a greater vision called the “manufacturisation” of the highways supply chain.

    Research reports: 1990 NASA/ASEE Summer Faculty Fellowship Program

    Reports on the research projects performed under the NASA/ASEE Summer Faculty Fellowship Program are presented. The program was conducted by The University of Alabama and MSFC during the period from June 4, 1990 through August 10, 1990. Some of the topics covered include: (1) Space Shuttles; (2) Space Station Freedom; (3) information systems; (4) materials and processes; (5) Space Shuttle main engine; (6) aerospace sciences; (7) mathematical models; (8) mission operations; (9) systems analysis and integration; (10) systems control; (11) structures and dynamics; (12) aerospace safety; and (13) remote sensing.

    A protocol design paradigm for rateless fulcrum code

    Establishing efficient multicast services in a network with heterogeneous devices, under the effects of an erasure channel, is one of the current priorities in coding theory, and in Network Coding (NC) in particular. Moreover, the growing number of clients with mobile devices of high processing capacity and the prevalence of delay-intolerant traffic have created a demand for feedback-free multicast schemes with respect to distributed resource management. Current communication platforms lack gradual and dynamic coding control based on the type of data being transmitted at the application layer. This work proposes a reliable and efficient transmission scheme based on a hybrid code composed of systematic coding and Random Linear Network Coding (RLNC), known as Fulcrum coding. This rateless, distributed hybrid coding scheme makes it possible to implement an adaptive resource-management system that increases the probability of decoding during data reception at each receiving node. Ultimately, the proposed scheme translates into higher network throughput and much shorter transmission times (RTT) through the efficient implementation of forward error correction (FEC). Doctorado. Doctor en Ingeniería de Sistemas y Computación
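
    To make the rateless, feedback-free operation concrete, here is a simplified Python sketch under stated assumptions: it keeps only the systematic-plus-rateless RLNC skeleton over GF(2) and counts how many transmissions a receiver on an erasure channel needs before its coefficient matrix reaches full rank. The Fulcrum outer-code expansion and the mixed inner/outer field sizes are deliberately omitted, and all names and parameters are illustrative rather than taken from the thesis.

```python
import random

# Simplified sketch of systematic + rateless RLNC over GF(2) (payloads omitted,
# only coefficient vectors are tracked as k-bit masks). This is NOT a Fulcrum
# implementation: the outer-code expansion and mixed field sizes are left out.

def coded_stream(k, rng):
    """Yield coefficient vectors: first the k systematic packets (unit
    vectors), then an endless stream of random nonzero GF(2) combinations."""
    for i in range(k):
        yield 1 << i
    while True:
        yield rng.randrange(1, 1 << k)

def add_to_basis(mask, basis):
    """Gaussian elimination over GF(2): try to add `mask` to the basis
    (a dict mapping leading-bit position to pivot); True if rank grows."""
    while mask:
        lead = mask.bit_length() - 1
        if lead not in basis:
            basis[lead] = mask
            return True
        mask ^= basis[lead]
    return False

def transmissions_until_decodable(k, erasure_prob, seed=0):
    """Count packets sent until the receiver, which loses each packet i.i.d.
    with probability `erasure_prob`, holds k independent combinations."""
    rng = random.Random(seed)   # one RNG drives both coding and the channel
    basis, sent = {}, 0
    for mask in coded_stream(k, rng):
        sent += 1
        if rng.random() < erasure_prob:
            continue            # packet erased by the channel
        add_to_basis(mask, basis)
        if len(basis) == k:     # full rank: the generation is decodable
            return sent

if __name__ == "__main__":
    print("transmissions to decode k=32 at 10% erasures:",
          transmissions_until_decodable(32, erasure_prob=0.1))
```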

    FinBook: literary content as digital commodity

    This short essay explains the significance of the FinBook intervention, and invites the reader to participate. We have associated each chapter within this book with a financial robot (FinBot), and created a market whereby book content will be traded with financial securities. As human labour increasingly consists of unstable and uncertain work practices, and as algorithms replace people on the virtual trading floors of the world's markets, we see members of society taking advantage of FinBots to invest and make extra funds. Bots of all kinds are making financial decisions for us, searching online on our behalf to help us invest, to consume products and services. Our contribution to this compilation is to turn the collection of chapters in this book into a dynamic investment portfolio, and thereby play out what might happen to the process of buying and consuming literature in the not-so-distant future. By attaching identities (through QR codes) to each chapter, we create a market in which the chapter can ‘perform’. Our FinBots will trade based on features extracted from the authors’ words in this book: the political, ethical and cultural values embedded in the work, and the extent to which the FinBots share authors’ concerns; and the performance of chapters amongst those human and non-human actors that make up the market and readership. In short, the FinBook model turns our work and the work of our co-authors into an investment portfolio, mediated by the market and the attention of readers. By creating a digital economy specifically around the content of online texts, our chapter and the FinBook platform aim to challenge the reader to consider how their personal values align them with individual articles, and how these become contested as they perform different value judgements about the financial performance of each chapter and the book as a whole. At the same time, by introducing ‘autonomous’ trading bots, we also explore the different ‘network’ affordances of paper-based books, whose scarcity derives from their analogue form, and digital books, whose uniqueness is achieved through encryption. We thereby speak to wider questions about the conditions of an aggressive market in which algorithms subject cultural and intellectual items – books – to economic parameters, and about the increasing ubiquity of data bots as actors in our social, political, economic and cultural lives. We understand that our marketization of literature may be an uncomfortable juxtaposition with the conventionally imagined way a book is created, enjoyed and shared: it is intended to be.

    Planetary Data Workshop, Part 1

    The community of planetary scientists addresses two general problems regarding planetary science data: (1) important data sets are being permanently lost; and (2) utilization is constrained by difficulties in locating and accessing science data and the supporting information necessary for its use. Means of correcting these problems, science and functional requirements for a systematic and phased approach, and technologies and standards appropriate to the solution were explored.