38 research outputs found

    Enhancement of precise underwater object localization

    Underwater communication applications extensively use localization services for object identification. Because of their significant impact on ocean exploration and monitoring, underwater wireless sensor networks (UWSNs) are becoming increasingly popular, and acoustic communication has largely overtaken radio frequency (RF) broadcasts as the dominant means of communication. The two most frequently employed localization methods estimate the angle of arrival (AOA) and the time difference of arrival (TDoA). The military and civilian sectors rely heavily on UWSNs for object identification in the underwater environment. As a result, there is a need in UWSNs for an accurate localization technique that accounts for the dynamic nature of the underwater environment. Time and position data are the two key parameters needed to accurately define the position of an object. Moreover, due to climate change there is now a need to constrain the energy consumption of UWSNs to limit carbon emissions and meet the net-zero target by 2050. To meet these challenges, we have developed an efficient localization algorithm that determines an object's position from the angle and distance of arrival of beacon signals. We account for factors such as sensor nodes not being time-synchronized with each other and the variation of the speed of sound in water. Our simulation results show that the proposed approach achieves high localization accuracy while accounting for temporal synchronization errors. The proposed approach outperforms existing localization approaches in terms of mean estimation error (MEE) and energy consumption. The MEE is shown to vary between 84.2154 m and 93.8275 m for four trials, 61.2256 m and 92.7956 m for eight trials, and 42.6584 m and 119.5228 m for twelve trials. Comparatively, the distance-based measurements show higher accuracy than the angle-based measurements.
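    As a rough illustration of the distance-based side of such schemes (a minimal sketch, not the authors' algorithm), the snippet below estimates a node's 2D position by linearized least squares from ranges to known beacon nodes, with the ranges obtained from acoustic travel times and an assumed nominal sound speed; the beacon layout, sound speed, and noiseless ranges are hypothetical values chosen for illustration.

```python
# Hypothetical sketch of range-based multilateration (not the paper's algorithm).
import numpy as np

def multilaterate(anchors, ranges):
    """Linearized least-squares 2D position estimate from beacon ranges (metres)."""
    # Subtracting the first range equation removes the quadratic unknown terms,
    # leaving a linear system A @ p = b in the unknown position p.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
         - ranges[1:] ** 2 + ranges[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Illustrative values: four beacons, ranges from acoustic travel times and a
# nominal (in reality depth- and temperature-dependent) sound speed.
anchors = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
true_pos = np.array([180.0, 320.0])
sound_speed = 1500.0  # m/s, assumed nominal value
travel_times = np.linalg.norm(anchors - true_pos, axis=1) / sound_speed
ranges = travel_times * sound_speed  # noiseless ranges for illustration
print(multilaterate(anchors, ranges))  # ~ [180. 320.]
```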

    Reliable Inference from Unreliable Agents

    Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues to ensure optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and to design strategies that ensure reliable inference in such systems. The inference process can be detection, estimation, or classification, and the components/agents in the system can be sensors and/or humans. The system components can be unreliable for a variety of reasons: faulty sensors, security attacks causing sensors to send falsified information, or unskilled human workers sending imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure reliable overall inference. In the first part of this thesis, we study the case when only sensors are present in the system, referred to as sensor networks. For sensor networks, the presence of malicious sensors, referred to as Byzantines, is considered. Byzantines are sensors that inject false information into the system. In such systems, the effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies. Game-theoretic formulations are explored to analyze two-player interactions. Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These mitigation schemes are of two kinds: Byzantine identification schemes and Byzantine-tolerant schemes. Using learning-based techniques, Byzantine identification schemes are designed that learn the identity of Byzantines in the network and use this information to improve system performance. When such schemes are not possible, Byzantine-tolerant schemes using error-correcting codes are developed that tolerate the effect of Byzantines and maintain good performance in the network. Error-correcting codes help in correcting the erroneous information from these Byzantines and thereby counter their attack. The second line of research in this thesis considers humans-only networks, referred to as human networks. A similar research strategy is adopted for human networks, where the effect of unskilled humans sharing beliefs with a central observer called the CEO is analyzed, and the loss in performance due to the presence of such unskilled humans is characterized. This problem falls under the family of problems in the information theory literature referred to as the CEO problem, but for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit as the number of agents L and the sum rate R tend to infinity. An intermediate regime of performance between the exponential behavior in discrete CEO problems and the 1/R behavior in Gaussian CEO problems is established. In summary, sharing beliefs (uniform) is fundamentally easier in terms of convergence rate than sharing measurements (Gaussian), but sharing decisions (discrete) is easier still. Besides theoretical analysis, experimental results are reported for experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network.
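    As a toy illustration of the Byzantine setting described above (a sketch only, not the thesis's attack or mitigation schemes), the following simulation flips the local decisions of a fraction of sensors before majority-vote fusion and shows how the fusion error grows with the Byzantine fraction; the sensor count, local decision accuracy, and decision-flipping attack model are assumptions made for the example.

```python
# Illustrative simulation (not the thesis's models): binary detection with a
# fraction of Byzantine sensors that flip their local decisions before
# reporting to the fusion center, which fuses them by majority vote.
import numpy as np

rng = np.random.default_rng(0)

def fusion_error(n_sensors=20, p_correct=0.8, alpha=0.3, trials=10_000):
    """Probability that majority-vote fusion decides the wrong hypothesis."""
    errors = 0
    n_byz = int(alpha * n_sensors)
    for _ in range(trials):
        # Honest local decisions: True = correct, False = wrong.
        decisions = rng.random(n_sensors) < p_correct
        # Byzantines flip their (otherwise honest) decisions.
        decisions[:n_byz] = ~decisions[:n_byz]
        # Ties are counted as fusion errors here.
        errors += np.sum(decisions) <= n_sensors // 2
    return errors / trials

for alpha in (0.0, 0.2, 0.4, 0.5):
    print(f"Byzantine fraction {alpha:.1f}: fusion error ~ {fusion_error(alpha=alpha):.3f}")
```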
    The act of fusing decisions from multiple agents is observed for humans, and the behavior is statistically modeled using hierarchical Bayesian models. The implications of such modeling for the design of large human-machine systems are discussed. Furthermore, a scheme based on error-correcting codes is proposed to improve system performance in the presence of unreliable humans in the inference process. For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of the proposed approach over the majority-voting-based approach are highlighted using simulated and real datasets. In the final part of the thesis, a human-machine inference framework is developed in which humans and machines interact to perform complex tasks faster and more efficiently. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is extremely important for current scenarios where humans and machines constantly interact with each other to perform even the simplest of tasks. While machines perform best in some tasks, humans still give better results in tasks such as identifying new patterns. By using humans and machines together, one can extract complete information about a phenomenon of interest. Such an architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for two cases of human-machine collaboration: machine as a coach and machine as a colleague. For simple systems, we demonstrate tangible performance gains from such collaboration, which provides design modules for larger and more complex human-machine systems. However, the details of such larger systems need to be explored further.
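    The sketch below illustrates the general idea of coding-based crowdsourcing versus plain majority voting; the code matrix, worker error model, and task sizes are invented for illustration and are not taken from the thesis. Each worker answers one easy binary micro-task given by a column of a small code matrix, and the fusion center recovers the class label by minimum Hamming distance decoding.

```python
# Hedged sketch of coding-based crowdsourcing vs. majority voting; all
# numerical choices (codewords, error rates, worker counts) are assumptions.
import numpy as np

rng = np.random.default_rng(1)

# Four classes encoded with a length-7 binary code (one row per class);
# the pairwise Hamming distance of these illustrative codewords is 4.
CODE = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

def coded_answers(true_class, p_err):
    """Each of 7 workers answers one binary micro-task, flipping it with prob p_err."""
    bits = CODE[true_class].copy()
    flips = rng.random(bits.size) < p_err
    return np.where(flips, 1 - bits, bits)

def decode(answers):
    """Minimum Hamming distance decoding to the nearest class codeword."""
    return int(np.argmin(np.sum(CODE != answers, axis=1)))

def majority_vote(true_class, p_err, n_workers=7):
    """Each worker attempts the full 4-class task; wrong answers are uniform."""
    labels = np.where(rng.random(n_workers) < p_err,
                      rng.integers(0, 4, n_workers), true_class)
    return int(np.bincount(labels, minlength=4).argmax())

trials, p_err = 5000, 0.35
coded = voted = 0
for _ in range(trials):
    true_class = int(rng.integers(0, 4))
    coded += decode(coded_answers(true_class, p_err)) == true_class
    voted += majority_vote(true_class, p_err) == true_class
print(f"coded accuracy ~ {coded / trials:.3f}, majority vote ~ {voted / trials:.3f}")
```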

    Cross-Layer Design of Sequential Detectors in Sensor Networks


    Non-Radiative Calibration of Active Antenna Arrays

    Antenna arrays offer significant benefits for modern wireless communication systems, but they remain difficult and expensive to produce. One impediment to utilising them is maintaining knowledge of the precise amplitude and phase relationships between the elements of the array, which are particularly sensitive to errors when each element of the array is connected to its own transceiver. These errors arise from multiple sources such as manufacturing errors, mutual coupling between the elements, thermal effects, component aging and element location errors. The calibration problem for antenna arrays is primarily the identification of the amplitude and phase mismatch, and then the use of this information for correction. This thesis will present a novel measurement-based calibration approach, which uses a fixed structure allowing each element of the array to be measured. The measurement structure is based around multiple sensors, which are interleaved with the elements of the array to provide a scalable structure with multiple measurement paths to almost all of the elements of the array. This structure is utilised by comparison-based calibration algorithms, so that each element of the array can be calibrated while mitigating the impact of the additional measurement hardware on the calibration accuracy. The calibration was proven in an investigation of the experimental test-bed, which represented a typical telecommunications base station. Calibration accuracies of ±0.5 dB and 5° were achieved for all but one amplitude outlier of 0.55 dB. The performance is limited only by the quality of the coupler design. This calibration approach has also been demonstrated for wideband signal calibration.
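    To convey the comparison-based idea in its simplest form, here is a minimal, hypothetical sketch: a single measurement sensor observes a test tone from each transceiver through a common coupling path, and every element's complex gain is corrected relative to element 0. The coupling value, error ranges, and single shared measurement path are invented simplifications, not the interleaved multi-sensor structure developed in the thesis.

```python
# Hypothetical comparison-based amplitude/phase calibration sketch.
import numpy as np

rng = np.random.default_rng(2)
n_elements = 8

# Unknown per-element errors: up to +/-1 dB amplitude and +/-20 deg phase.
amp_err = 10 ** (rng.uniform(-1.0, 1.0, n_elements) / 20)
phase_err = np.deg2rad(rng.uniform(-20.0, 20.0, n_elements))
gain = amp_err * np.exp(1j * phase_err)

# Each element transmits the same test tone; the sensor measures gain * tone
# through a coupling path assumed identical for every element in this sketch.
tone = 1.0 + 0.0j
coupling = 0.1 * np.exp(1j * 0.7)
measured = coupling * gain * tone

# Comparison-based estimate: the ratio of each measurement to the reference
# element cancels the unknown coupling, leaving the relative mismatch.
relative = measured / measured[0]
correction = 1.0 / relative

aligned = gain * correction
print(np.allclose(aligned, aligned[0]))  # True: all elements now match element 0
```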

    Wireless communication, sensing, and REM: A security perspective

    The diverse requirements of next-generation communication systems necessitate awareness, flexibility, and intelligence as essential building blocks of future wireless networks. Awareness can be obtained from the radio signals in the environment using wireless sensing and radio environment mapping (REM) methods. This is, however, accompanied by threats such as eavesdropping, manipulation, and disruption posed by malicious attackers. To this end, this work analyzes wireless sensing and radio environment awareness mechanisms, highlights their vulnerabilities, and provides solutions for mitigating them. As an example, the different threats to REM and their consequences in a vehicular communication scenario are described. Furthermore, the use of REM for securing communications is discussed, and future directions regarding sensing/REM security are highlighted.

    one6G white paper, 6G technology overview: Second Edition, November 2022

    6G is expected to address the demands for mobile networking services in 2030 and beyond. These are characterized by a variety of diverse, often conflicting requirements, ranging from technical ones such as extremely high data rates, an unprecedented scale of communicating devices, high coverage, low communication latency, and flexibility of extension, to non-technical ones such as enabling sustainable growth of society as a whole, e.g., through the energy efficiency of deployed networks. On the one hand, 6G is expected to fulfil all these individual requirements, thus extending the limits set by the previous generations of mobile networks (e.g., ten times lower latencies, or a hundred times higher data rates than in 5G). On the other hand, 6G should also enable use cases characterized by combinations of these requirements never seen before (e.g., both extremely high data rates and extremely low communication latency). In this white paper, we give an overview of the key enabling technologies that constitute the pillars for the evolution towards 6G. They include: terahertz frequencies (Section 1), 6G radio access (Section 2), next generation MIMO (Section 3), integrated sensing and communication (Section 4), distributed and federated artificial intelligence (Section 5), intelligent user plane (Section 6) and flexible programmable infrastructures (Section 7). For each enabling technology, we first give the background on how and why the technology is relevant to 6G, backed up by a number of relevant use cases. After that, we describe the technology in detail, outline the key problems and difficulties, and give a comprehensive overview of the state of the art in that technology. 6G is, however, not limited to these seven technologies. They merely represent our current understanding of the technological environment in which 6G is being born. Future versions of this white paper may include other relevant technologies, as well as discuss how these technologies can be glued together in a coherent system.

    Design of large polyphase filters in the Quadratic Residue Number System
