
    Effects of Spatial Randomness on Locating a Point Source with Distributed Sensors

    Most studies that consider the problem of estimating the location of a point source in wireless sensor networks assume that the source location is estimated by a set of spatially distributed sensors whose locations are fixed. Motivated by the fact that the observation quality and the performance of the localization algorithm depend on the sensor locations, which may be randomly distributed, this paper investigates the performance of a recently proposed energy-based source-localization algorithm under the assumption that the sensors are positioned according to a uniform clustering process. Practical considerations such as the existence and size of exclusion zones around each sensor and around the source are studied. By introducing a novel performance measure called the estimation outage, it is shown how parameters related to the network geometry, such as the distance between the source and its closest sensor and the number of sensors within a region surrounding the source, affect the localization performance.
    Comment: 7 pages, 5 figures. To appear at the 2014 IEEE International Conference on Communications (ICC'14) Workshop on Advances in Network Localization and Navigation (ANLN), Invited Paper.
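The role of network geometry can be illustrated with a small Monte Carlo sketch (all numbers below are hypothetical, and this is a stand-in illustration rather than the paper's algorithm): sensors are dropped uniformly in a disk around the source, sensors inside an exclusion zone are discarded, and an "outage" is declared whenever the closest surviving sensor is farther than some distance at which localization quality is assumed to degrade.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_sensor_distance(n_sensors, region_radius, exclusion_radius, rng):
    """Drop sensors uniformly in a disk centred on the source, discard any
    that fall inside the exclusion zone, and return the distance from the
    source to the closest surviving sensor."""
    # Uniform points in a disk: sample the radius via the inverse CDF.
    r = region_radius * np.sqrt(rng.uniform(size=n_sensors))
    r = r[r >= exclusion_radius]  # enforce the exclusion zone
    return r.min() if r.size else np.inf

# Crude "estimation outage" proxy: the fraction of random deployments in
# which the nearest sensor lies beyond a chosen quality threshold.
trials = [nearest_sensor_distance(50, 100.0, 5.0, rng) for _ in range(2000)]
outage = float(np.mean(np.array(trials) > 30.0))
```

The same simulation loop extends naturally to the other geometric statistic the abstract mentions, the number of sensors falling inside a region around the source.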

    Determining Point of Burst of Artillery Shells using Acoustic Source Localisation

    Source localisation is a method to estimate the position of a source. In acoustic source localisation (ASL), the location of a sound source is estimated using acoustic sensors such as microphones. In ASL, the time difference of arrival (TDOA) for each pair of microphones is estimated. For any pair of microphones, the surface on which the TDOA is constant is a hyperboloid of two sheets; the source location is then estimated as the point where all the associated hyperboloids most nearly intersect. This concept has been used at our range to find the point of burst of artillery shells using an array of sensors. In this paper, a simulation model is developed to examine the applicability of acoustic source localisation for determining the point of burst of artillery shells. Randomness is incorporated into the model in terms of the gustiness of the downrange sea wind. The simulation results have been validated against trajectory data of projectiles tracked by radar. Finally, an acoustic sensor-array-based setup has been developed and used for localising points of burst.
    Science Journal, Vol. 64, No. 6, November 2014, pp. 517-523, DOI: http://dx.doi.org/10.14429/dsj.64.811
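The TDOA construction described above can be sketched numerically: given microphone positions and measured TDOAs, the burst location is taken as the grid point whose predicted TDOAs best match the measurements, i.e. the point where the hyperboloids most nearly intersect. The geometry, grid, and noise-free measurements here are hypothetical, not the authors' setup.

```python
import numpy as np

# Hypothetical 2-D geometry: four microphones and a burst location.
mics = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
source = np.array([37.0, 62.0])
c = 343.0  # speed of sound, m/s

def tdoas(point, mics, c):
    """Time differences of arrival relative to microphone 0."""
    d = np.linalg.norm(mics - point, axis=1)
    return (d[1:] - d[0]) / c

measured = tdoas(source, mics, c)  # noise-free "measurements"

# Locate the source as the grid point whose predicted TDOAs best match
# the measured ones, in the least-squares sense.
xs = np.linspace(0.0, 100.0, 201)
grid = np.array([[x, y] for x in xs for y in xs])
errors = [np.sum((tdoas(p, mics, c) - measured) ** 2) for p in grid]
estimate = grid[int(np.argmin(errors))]
```

In practice the TDOAs are noisy, so a refinement step (e.g. Gauss-Newton around the best grid point) would replace the exact match found here.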

    Nearfield Acoustic Holography using sparsity and compressive sampling principles

    Regularization of the inverse problem is a complex issue when using Near-field Acoustic Holography (NAH) techniques to identify vibrating sources. This paper shows that, for convex homogeneous plates with arbitrary boundary conditions, new regularization schemes can be developed, based on the sparsity of the normal velocity of the plate in a well-designed basis, i.e., the possibility of approximating it as a weighted sum of few elementary basis functions. In particular, these new techniques can handle discontinuities of the velocity field at the boundaries, which can be problematic with standard techniques. This comes at the cost of a higher computational complexity to solve the associated optimization problem, though it remains easily tractable with out-of-the-box software. Furthermore, this sparsity framework allows us to take advantage of the concept of Compressive Sampling: under some conditions on the sampling process (here, the design of a random array, which can be numerically and experimentally validated), it is possible to reconstruct the sparse signals with significantly fewer measurements (i.e., microphones) than classically required. After introducing the different concepts, this paper presents numerical and experimental results of NAH with two plate geometries, and compares the advantages and limitations of these sparsity-based techniques over standard Tikhonov regularization.
    Comment: Journal of the Acoustical Society of America (2012)
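The sparse-recovery idea can be illustrated with a generic compressed-sensing toy. Everything below is a stand-in: the identity sparsity basis and Gaussian sampling matrix replace the paper's plate-specific basis and microphone array, and plain iterative soft thresholding replaces off-the-shelf optimization software.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 128 unknowns, only 48 random measurements, and a
# 4-sparse signal to recover.
n, m = 128, 48
idx = rng.choice(n, size=4, replace=False)
x_true = np.zeros(n)
x_true[idx] = [1.0, -2.0, 1.5, 0.8]
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sampling matrix
y = A @ x_true

def ista(A, y, lam=0.01, steps=3000):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x - A.T @ (A @ x - y) / L                          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrinkage
    return x

x_hat = ista(A, y)
```

With far fewer measurements than unknowns (48 versus 128), the l1 penalty still recovers the sparse vector, which is the mechanism that lets a sparsity-based NAH scheme use fewer microphones.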

    Locating Sensors for Detecting Source-to-Target Patterns of Special Nuclear Material Smuggling: A Spatial Information Theoretic Approach

    In this paper, a spatial information-theoretic model is proposed to locate sensors for detecting source-to-target patterns of special nuclear material (SNM) smuggling. To ship nuclear materials from a source location with SNM production to a target city, smugglers must employ global and domestic logistics systems. This paper focuses on locating a limited set of fixed and mobile radiation sensors in a transportation network, with the intent to maximize the expected information gain and minimize the estimation error for the subsequent nuclear material detection stage. A Kalman filtering-based framework is adapted to assist the decision-maker in quantifying the network-wide information gain and SNM flow estimation accuracy.
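The Kalman-filtering viewpoint can be sketched in one dimension, far simpler than the paper's network-wide model and with all numbers hypothetical: each sensor reading shrinks the posterior variance of the tracked quantity, and that shrinkage is one simple proxy for the information gain a sensor placement buys.

```python
# Scalar Kalman filter: the tracked quantity (say, a flow estimate on one
# network link) follows a random walk, and each sensor reading shrinks the
# posterior variance; the shrinkage stands in for the information gain of
# taking that measurement.
def kalman_step(x, P, z, q, r):
    P = P + q          # predict: random-walk dynamics add process noise q
    K = P / (P + r)    # Kalman gain for a direct measurement of variance r
    x = x + K * (z - x)  # update the state estimate toward the reading
    P = (1.0 - K) * P    # posterior variance after the measurement
    return x, P

x, P = 0.0, 10.0       # vague prior on the tracked flow
variances = [P]
for z in [1.2, 0.9, 1.1, 1.0]:  # hypothetical sensor readings
    x, P = kalman_step(x, P, z, q=0.01, r=0.5)
    variances.append(P)
```

Choosing sensor locations to maximize information gain then amounts to picking the measurements that drive this posterior variance down fastest across the network.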

    Compressive Matched-Field Processing

    Source localization by matched-field processing (MFP) generally involves solving a number of computationally intensive partial differential equations. This paper introduces a technique that mitigates this computational workload by "compressing" these computations. Drawing on key concepts from the recently developed field of compressed sensing, it shows how a low-dimensional proxy for the Green's function can be constructed by backpropagating a small set of random receiver vectors. Then, the source can be located by performing a number of "short" correlations between this proxy and the projection of the recorded acoustic data in the compressed space. Numerical experiments in a Pekeris ocean waveguide are presented that demonstrate that this compressed version of MFP is as effective as traditional MFP even when the compression is significant. The results are particularly promising in the broadband regime, where using as few as two random backpropagations per frequency performs almost as well as traditional broadband MFP, but with the added benefit of generic applicability. That is, the computationally intensive backpropagations may be computed offline independently from the received signals, and may be reused to locate any source within the search grid area.
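The compression step can be mimicked with a toy correlation model. Here the "replicas" are random vectors standing in for the per-grid-point Green's functions, and locating the source is an argmax of correlations, performed either with the full data or after a single random projection; none of this reproduces the paper's waveguide physics.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for MFP: each grid point has a unit-norm "replica" vector
# (playing the role of its Green's function at the receivers), and the
# source is the grid point whose replica best correlates with the data.
n_receivers, n_grid, true_idx = 64, 200, 123
G = rng.standard_normal((n_receivers, n_grid))
G /= np.linalg.norm(G, axis=0)  # unit-norm replicas
data = G[:, true_idx]           # noise-free received field

def locate(replicas, received):
    return int(np.argmax(np.abs(replicas.T @ received)))

# Compressed version: project replicas and data into a low-dimensional
# space with one random matrix, then run the same "short" correlations.
m = 32
Phi = rng.standard_normal((m, n_receivers)) / np.sqrt(m)
est_full = locate(G, data)
est_comp = locate(Phi @ G, Phi @ data)
```

The projected replicas `Phi @ G` can be computed once, offline, and reused for any received data vector, which mirrors the reusable-backpropagation benefit the abstract highlights.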

    Distributed Detection and Estimation in Wireless Sensor Networks

    Wireless sensor networks (WSNs) are typically formed by a large number of densely deployed, spatially distributed sensors with limited sensing, computing, and communication capabilities that cooperate with each other to achieve a common goal. In this dissertation, we investigate the problem of distributed detection, classification, estimation, and localization in WSNs. In this context, the sensors observe the conditions of their surrounding environment, locally process their noisy observations, and send the processed data to a central entity, known as the fusion center (FC), through parallel communication channels corrupted by fading and additive noise. The FC then combines the received information from the sensors to make a global inference about the underlying phenomenon, which can be either the detection or classification of a discrete variable or the estimation of a continuous one. In the domain of distributed detection and classification, we propose a novel scheme that enables the FC to make a multi-hypothesis classification of an underlying hypothesis using only binary detections of spatially distributed sensors. This goal is achieved by exploiting the relationship between the influence fields characterizing different hypotheses and the accumulated noisy versions of local binary decisions as received by the FC, where the influence field of a hypothesis is defined as the spatial region in its surroundings in which it can be sensed using some sensing modality. In the realm of distributed estimation and localization, we make four main contributions: (a) We formulate a general framework that estimates a vector of parameters associated with a deterministic function using spatially distributed noisy samples of the function, for both analog and digital local processing schemes. (b) We consider the estimation of a scalar random signal at the FC and derive an optimal power-allocation scheme that assigns the optimal local amplification gains to the sensors performing analog local processing. The objective of this optimized power allocation is to minimize the L2-norm of the vector of local transmission powers, given a maximum estimation distortion at the FC. We also propose a variant of this scheme that uses a limited-feedback strategy to eliminate the requirement of perfect feedback of the instantaneous channel fading coefficients from the FC to the local sensors through infinite-rate, error-free links. (c) We propose a linear spatial-collaboration scheme in which sensors collaborate with each other by sharing their local noisy observations. We derive the optimal set of coefficients used to form linear combinations of the shared noisy observations at the local sensors, minimizing the total estimation distortion at the FC given a constraint on the maximum average cumulative transmission power in the entire network. (d) Using a novel performance measure called the estimation outage, we analyze the effects of the spatial randomness of the sensor locations on the quality and performance of localization algorithms, by considering an energy-based source-localization scheme under the assumption that the sensors are positioned according to a uniform clustering process.
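As a minimal illustration of the estimation-at-the-FC setting, consider an assumed Gaussian model, much simpler than the fading channels treated in the dissertation: every sensor observes the same scalar in independent noise of known variance, and the fusion center combines the readings with inverse-variance weights, which is the best linear unbiased estimator for this model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed model: every sensor observes the same scalar theta in independent
# Gaussian noise; the fusion center fuses with inverse-variance weights.
theta = 2.5
noise_var = np.array([0.5, 1.0, 2.0, 4.0])  # per-sensor noise powers
obs = theta + rng.standard_normal(4) * np.sqrt(noise_var)

w = (1.0 / noise_var) / np.sum(1.0 / noise_var)  # weights sum to one
theta_hat = float(np.sum(w * obs))               # fused estimate
fused_var = 1.0 / np.sum(1.0 / noise_var)        # variance of the estimate
```

The fused variance is smaller than that of the best individual sensor, which is the basic gain that the power-allocation and collaboration schemes above then optimize under channel and power constraints.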

    Structural Health Monitoring and Damage Identification of Bridges Using Triaxial Geophones and Time Series Analysis

    This study uses the vibration data of two full-scale bridges, subjected to controlled damage, along I-40 west near downtown Knoxville, TN, to evaluate the feasibility of time series-based damage identification techniques for structural health monitoring. The vibration data were acquired for the entrance ramp to James White Parkway from I-40 westbound, and for the I-40 westbound bridge over 4th Avenue, before the bridges were demolished during the I-40 expansion project called Smartfix40. The vibration data were recorded using an array of triaxial geophones, highly sensitive vibration sensors, in both the healthy and damaged conditions of the bridges. The data are evaluated using linear stationary time series models to extract damage-sensitive features (DSFs), which are used to identify the condition of a bridge. Two time series-based damage identification techniques are developed and used in this study. In the first technique, the vibration data are corrected for the sensor transfer function of the given geophone type and then convolved with random values to create input for autoregressive (AR) time series models. A two-stage prediction model, combining AR and autoregressive with exogenous input (ARX) models, is employed to obtain DSFs. An outlier analysis method based on DSF values is used to detect the damage. This technique is evaluated using the vertical vibration data of the two bridges subjected to three controlled amounts of known damage on the steel girders. In the second technique, ARX models and a sensor clustering technique are used to obtain prediction errors in the healthy and damaged conditions of the bridges; the DSF is defined as the ratio of the standard deviations of the prediction errors. This technique is evaluated using the triaxial vibration data of the two bridges. This study also presents a finite element analysis of the I-40 westbound bridge over 4th Avenue to obtain simulated vibration data for different damage levels and locations.
    The simulated data are then used in the ARX-model and sensor-clustering damage identification technique to investigate the effects of damage location and extent, the efficacy of each triaxial vibration component, and the effect of noise on the vibration-based damage identification techniques.
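The residual-ratio DSF of the second technique can be sketched on synthetic data. The AR order, coefficients, and the "damage" shift below are hypothetical, not the bridge data: a model fitted to a healthy signal predicts a signal with altered dynamics poorly, so the ratio of residual standard deviations rises above one.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_ar2(a1, a2, n, rng):
    """Simulate an AR(2) process x[t] = a1*x[t-1] + a2*x[t-2] + e[t]."""
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()
    return x

def ar_fit(x, p):
    """Least-squares fit of AR(p) coefficients."""
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    coeffs, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coeffs

def residual_std(x, coeffs):
    """Standard deviation of the one-step prediction errors."""
    p = len(coeffs)
    X = np.column_stack([x[p - k:len(x) - k] for k in range(1, p + 1)])
    return float(np.std(x[p:] - X @ coeffs))

healthy = simulate_ar2(1.2, -0.6, 4000, rng)  # baseline dynamics
damaged = simulate_ar2(0.6, -0.2, 4000, rng)  # shifted dynamics = "damage"

coeffs = ar_fit(healthy, 2)  # model of the healthy condition
dsf = residual_std(damaged, coeffs) / residual_std(healthy, coeffs)
```

A DSF near one indicates the healthy model still explains the signal; values noticeably above one flag a change in the underlying dynamics, which is then attributed to damage.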