112 research outputs found

    On Detection and Ranking Methods for a Distributed Radio-Frequency Sensor Network: Theory and Algorithmic Implementation

    A theoretical foundation for pre-detection fusion of sensors is needed if the United States Air Force is ever to field a system of distributed and layered sensors that can detect and perform parameter estimation of complex, extended targets in difficult interference environments, without human intervention, in near real-time. This research is relevant to the United States Air Force within its layered sensing and cognitive radar/sensor initiatives. The asymmetric threat of the twenty-first century introduces stressing sensing conditions that may exceed the ability of traditional monostatic sensing systems to perform their required intelligence, surveillance and reconnaissance (ISR) missions. In particular, there is growing interest within the United States Air Force to move beyond single-sensor sensing systems, and instead begin fielding and leveraging distributed sensing systems to overcome the inherent challenges imposed by the modern threat space. This thesis analyzes the impact of integrating target echoes in the angular domain, to determine whether better detection and ranking performance is achieved through the use of a distributed sensor network. Bespoke algorithms are introduced for detection and ranking ISR missions leveraging a distributed network of radio-frequency sensors: the first set of algorithms is based upon a depth-based nonparametric detection algorithm, which is shown to enhance the recovery of targets at lower signal-to-noise ratios than an equivalent monostatic radar system; the second set is based upon random matrix theory and concentration-of-measure mathematics, and is demonstrated to outperform the depth-based nonparametric approach. This latter approach is shown to be effective across a broad range of signal-to-noise ratios, both positive and negative.
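    The depth-based detector is described only at a high level above. As a hedged illustration of the general idea, the sketch below (Python, all names hypothetical) scores test returns by Mahalanobis depth, one standard data-depth function and not necessarily the one developed in the thesis, relative to a noise-only reference set, and declares a detection when a return is unusually shallow:

```python
import numpy as np

def mahalanobis_depth(x, reference):
    """Depth of sample x relative to a noise-only reference cloud.
    Illustrative stand-in; the thesis's exact depth statistic is not
    reproduced here."""
    mu = reference.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))
    d2 = (x - mu) @ cov_inv @ (x - mu)
    return 1.0 / (1.0 + d2)  # noise-like samples sit "deep" (near 1)

def detect(samples, reference, alpha=0.05):
    """Flag samples whose depth falls below the alpha-quantile of the
    depths seen on the noise-only reference data (a nonparametric
    threshold on the depth statistic)."""
    ref_depths = np.array([mahalanobis_depth(r, reference) for r in reference])
    threshold = np.quantile(ref_depths, alpha)
    return np.array([mahalanobis_depth(s, reference) < threshold
                     for s in samples])
```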

    Improving reconstructions of digital holograms

    Digital holography is a two-step process of recording a hologram on an electronic sensor and reconstructing it numerically. This thesis makes a number of contributions to the second step of this process. These can be split into two distinct parts: A) speckle reduction in reconstructions of digital holograms (DHs), and B) modeling and overcoming partial occlusion effects in reconstructions of DHs, and using occlusions to reduce the effects of the twin image in reconstructions of DHs. Part A represents the major part of this thesis. Speckle reduction forms an important step in many digital holographic applications, and we have developed a number of techniques that can be used to reduce its corruptive effect in reconstructions of DHs. These techniques range from 3D filtering of DH reconstructions to a technique that filters in the Fourier domain of the reconstructed DH. We have also investigated the most commonly used industrial speckle reduction technique: wavelet filters. In Part B, we investigate the nature of opaque and non-opaque partial occlusions. We motivate this work by trying to find a subset of pixels that overcomes the effects of a partial occlusion, thus revealing otherwise hidden features of an object captured using digital holography. Finally, we have used an occlusion at the twin-image plane to completely remove the corrupting effect of the out-of-focus twin image on reconstructions of DHs.
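    One Fourier-domain speckle reduction technique of the kind alluded to above partitions the spectrum of a reconstruction and averages the resulting band-limited intensities. The sketch below is a minimal version of that idea, assuming a complex-valued reconstruction whose dimensions divide evenly into the tile grid (both assumptions; the tile count is illustrative):

```python
import numpy as np

def speckle_reduce_fourier(recon, n_tiles=4):
    """Split the Fourier domain of a complex reconstruction into
    n_tiles x n_tiles blocks, inverse-transform each block, and
    average the resulting intensities. Each band-limited copy carries
    a different speckle pattern, so averaging lowers speckle contrast
    at the cost of spatial resolution."""
    F = np.fft.fftshift(np.fft.fft2(recon))
    h, w = F.shape
    hs, ws = h // n_tiles, w // n_tiles
    acc = np.zeros((h, w))
    for i in range(n_tiles):
        for j in range(n_tiles):
            band = np.zeros_like(F)
            band[i*hs:(i+1)*hs, j*ws:(j+1)*ws] = \
                F[i*hs:(i+1)*hs, j*ws:(j+1)*ws]
            acc += np.abs(np.fft.ifft2(np.fft.ifftshift(band))) ** 2
    return acc / n_tiles**2
```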

    Digital Hologram Image Processing

    In this thesis we discuss and examine the contributions we have made to the field of digital hologram image processing. In particular, we deal with the processing of numerical reconstructions of real-world three-dimensional macroscopic objects recorded by in-line digital holography. Our selection of in-line digital holography over off-axis digital holography is based primarily on resolution. There is evidence that an off-axis architecture requires approximately four times the resolution of an in-line architecture to record a hologram. The high resolution of holographic film means this is acceptable in optical holography. However, in digital holography the bandwidth of the recording medium is already severely limited, and if we are to extract information from reconstructions we need the highest possible resolution, which, if one cannot harness the functionality of accurately reconstructing phase, is achieved through an in-line architecture. Two of the most significant problems encountered with reconstructions of in-line digital holograms are the small depth-of-field of each reconstruction and the corruptive influence of the unwanted twin image. This small depth-of-field makes it difficult to accurately process the numerical reconstructions, and it is in this shortcoming that we make our first three contributions: focusing algorithms, background and object segmentation algorithms, and algorithms to create a single image where all object regions are in focus. Using a combination of our focusing algorithms and our background segmentation algorithm, we make our fourth contribution: a rapid twin-image reduction algorithm for in-line digital holography. We believe that our techniques are applicable to all digital holographic objects; in particular, they are relevant to objects where phase unwrapping is not an option. We demonstrate the usefulness of the algorithms for a range of macroscopic objects with varying texture and contrast.
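    As a hedged sketch of how a focusing algorithm of this kind can operate (the thesis's own focus measures and reconstruction kernel are not reproduced; the Fresnel transfer function, wavelength, and pixel pitch below are illustrative placeholders): reconstruct the hologram at a series of candidate depths and keep the depth that maximises a sharpness score.

```python
import numpy as np

def fresnel_propagate(field, wavelength, dz, dx):
    """Propagate a complex field by distance dz using a Fresnel
    transfer function (one common numerical reconstruction kernel)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def autofocus(hologram, depths, wavelength=633e-9, dx=7.4e-6):
    """Reconstruct at each candidate depth and return the depth that
    maximises intensity variance, a common sharpness measure."""
    scores = [(np.abs(fresnel_propagate(hologram, wavelength, dz, dx)) ** 2).var()
              for dz in depths]
    return depths[int(np.argmax(scores))]
```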

    The autocorrelated Bayesian sampler: a rational process for probability judgments, estimates, confidence intervals, choices, confidence judgments, and response times

    Normative models of decision-making that optimally transform noisy (sensory) information into categorical decisions qualitatively mismatch human behavior. Indeed, leading computational models have only achieved high empirical corroboration by adding task-specific assumptions that deviate from normative principles. In response, we offer a Bayesian approach that implicitly produces a posterior distribution of possible answers (hypotheses) in response to sensory information. We assume, however, that the brain has no direct access to this posterior, and can only sample hypotheses according to their posterior probabilities. Accordingly, we argue that the primary problem of normative concern in decision-making is integrating stochastic hypotheses, rather than stochastic sensory information, to make categorical decisions. This implies that human response variability arises mainly from posterior sampling rather than sensory noise. Because human hypothesis generation is serially correlated, hypothesis samples will be autocorrelated. Guided by this new problem formulation, we develop a new process, the Autocorrelated Bayesian Sampler (ABS), which grounds autocorrelated hypothesis generation in a sophisticated sampling algorithm. The ABS provides a single mechanism that qualitatively explains many empirical effects of probability judgments, estimates, confidence intervals, choices, confidence judgments, response times, and their relationships. Our analysis demonstrates the unifying power of a perspective shift in the exploration of normative models. It also exemplifies the proposal that the “Bayesian brain” operates using samples not probabilities, and that variability in human behavior may primarily reflect computational rather than sensory noise.
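    The ABS grounds hypothesis generation in a more sophisticated sampler than the one below. Purely as an illustration of the core claim, that decisions are read off a short, autocorrelated chain of posterior samples rather than off the posterior itself, here is a minimal random-walk Metropolis-Hastings sketch (the example posterior and all names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_chain(log_post, x0, n, step=0.5):
    """Random-walk Metropolis-Hastings: successive samples from the
    posterior are serially correlated, which is the property the ABS
    account exploits."""
    xs, x = [x0], x0
    for _ in range(n - 1):
        prop = x + step * rng.standard_normal()
        if np.log(rng.random()) < log_post(prop) - log_post(x):
            x = prop
        xs.append(x)
    return np.array(xs)

# Decide whether a latent quantity exceeds 0 from a handful of
# autocorrelated hypothesis samples, not from the full posterior.
log_post = lambda x: -0.5 * (x - 0.3) ** 2   # hypothetical posterior
samples = mh_chain(log_post, x0=0.0, n=20)
choice = (samples > 0).mean() > 0.5          # majority of sampled hypotheses
```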

    Least-Squares Based Adaptive Source Localization with Biomedical Applications

    In this thesis, we study certain aspects of signal source/target localization by sensory agents and their biomedical applications. We first focus on a generic distance-measurement-based problem: estimation of the location of a signal source by a sensory agent equipped with a distance measurement unit, or by a team of such sensory agents. This problem was addressed in some recent studies using a gradient-based adaptive algorithm. In this study, we design a least-squares based adaptive algorithm with a forgetting factor for the same task. In addition to establishing its mathematical background, we perform simulations for both stationary and drifting target cases. The least-squares based algorithm we propose bears the same asymptotic stability and convergence properties as the gradient algorithm previously studied. It is further demonstrated via simulation studies that the proposed least-squares algorithm converges significantly faster to the resultant location estimates than the gradient algorithm for high values of the forgetting factor, and significantly reduces noise effects for small values of the forgetting factor. We also focus on the problem of localizing a medical device/implant in the human body by a mobile sensor unit (MSU) using distance measurements. As the particular distance measurement method, a time-of-flight (TOF) based approach involving ultra-wideband signals is used, noting the important effects of the medium characteristics on this measurement method. Since the human body consists of different organs and tissues, each with a different signal permittivity coefficient and hence a different signal propagation speed, one cannot assume a constant signal propagation speed for the aforementioned medical localization problem. Furthermore, the propagation speed is unknown. Considering all the above factors and utilizing a TOF-based distance measurement mechanism, we use the proposed adaptive least-squares algorithm to estimate the 3-D location of a medical device/implant in the human body. In the design of the adaptive algorithm, we first derive a linear parametric model with the unknown 3-D coordinates of the device/implant and the current signal propagation speed of the medium as its parameters. Then, based on this parametric model, we design the proposed adaptive algorithm, which uses the measured 3-D position of the MSU and the measured TOF as regressor signals. After providing a formal analysis of the convergence properties of the proposed localization algorithm, we implement numerical tests to analyze its properties, considering two types of scenarios: (1) a priori information regarding the region, e.g., quadrant (among upper-left, upper-right, lower-left, lower-right of the human body), of the implant location is available, and (2) such a priori information is not available. In (1), assuming knowledge of a fixed average relative permittivity for each region, we establish that the proposed algorithm converges to an estimate with zero estimation error. Moreover, different white Gaussian noises are added to emulate the TOF measurement disturbances, and it is observed that the proposed algorithm is robust to such noises/disturbances. In (2), although perfect estimation is not achieved, the estimation error remains at a low, admissible level.
In addition, for both cases (1) and (2), the effects of the forgetting factor have been investigated; the results show that small forgetting factor values reduce noise effects significantly, while high forgetting factor values speed up convergence of the estimation.
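    A minimal sketch of the estimator class described above: recursive least squares with a forgetting factor, applied to one possible linearisation of the TOF model. The exact parametric model and forgetting-factor convention in the thesis may differ, and all names are illustrative.

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with forgetting factor lam. In this
    discrete-time convention, lam near 1 retains history (more noise
    averaging) and smaller lam adapts faster; the thesis appears to
    use the continuous-time convention, in which a *larger* factor
    forgets faster."""
    def __init__(self, dim, lam=0.98, p0=1e3):
        self.theta = np.zeros(dim)
        self.P = p0 * np.eye(dim)
        self.lam = lam

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# One linearisation of the TOF model ||p - x|| = v * t (an assumption):
# squaring gives  2 x^T p + t^2 v^2 - ||p||^2 = ||x||^2,
# which is linear in theta = [p, v^2, ||p||^2].
est = ForgettingRLS(dim=5)

def step(x_msu, tof):
    """Feed one MSU position / TOF measurement pair to the estimator
    and return the current 3-D implant position estimate."""
    phi = np.concatenate([2.0 * x_msu, [tof**2, -1.0]])
    theta = est.update(phi, float(x_msu @ x_msu))
    return theta[:3]
```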

    Occlusion handling in video surveillance systems

    Camera Spatial Frequency Response Derived from Pictorial Natural Scenes

    Camera system performance is a prominent part of many aspects of imaging science and computer vision. There are many aspects of camera performance that determine how accurately the image represents the scene, including measurements of colour accuracy, tone reproduction, geometric distortion, and image noise. The research conducted in this thesis focuses on the Modulation Transfer Function (MTF), a widely used camera performance measurement employed to describe resolution and sharpness. Traditionally measured under controlled conditions with characterised test charts, the MTF is a measurement restricted to laboratory settings. The MTF is based on linear system theory, meaning the system's output must relate linearly to its input. Established methods for measuring the camera system MTF include ISO 12233:2017 for measuring the edge-based Spatial Frequency Response (e-SFR), a sister measure of the MTF designed for measuring discrete systems. Many modern camera systems incorporate non-linear, highly adaptive image signal processing (ISP) to improve image quality. As a result, system performance becomes scene- and processing-dependent, adapting to the scene contents captured by the camera. Established test-chart-based MTF/SFR methods do not describe this adaptive nature; they only provide the response of the camera to a test chart signal. Further, with the increased use of Deep Neural Networks (DNNs) for image recognition tasks and autonomous vision systems, there is an increased need for monitoring system performance outside laboratory conditions in real time, i.e. live-MTF. Such measurements would assist in monitoring camera systems to ensure they are fully operational for decision-critical tasks. This thesis presents research conducted to develop a novel automated methodology that estimates the standard e-SFR directly from pictorial natural scenes. This methodology has the potential to produce scene-dependent and real-time camera system performance measurements, opening new possibilities in imaging science and allowing live monitoring/calibration of systems for autonomous computer vision applications. The proposed methodology incorporates many well-established image processes, as well as others developed for specific purposes. It is presented in two parts. Firstly, Natural Scene derived SFRs (NS-SFRs) are obtained from isolated captured scene step-edges, after verifying that these edges have the correct profile for implementing the slanted-edge algorithm. The resulting NS-SFRs are shown to be a function of both camera system performance and scene contents. The second part of the methodology uses a series of derived NS-SFRs to estimate the system e-SFR, as per the ISO 12233 standard. This is achieved by applying a sequence of thresholds to segment the data most likely corresponding to the system performance. These thresholds (a) group the expected optical performance variation across the imaging circle within radial distance segments, (b) obtain the highest-performance NS-SFRs per segment, and (c) select the NS-SFRs with input edge and region of interest (ROI) parameter ranges shown to introduce minimal e-SFR variation. The selected NS-SFRs are averaged per radial segment to estimate system e-SFRs across the field of view. A weighted average of these estimates provides an overall system performance estimation. This methodology is implemented for e-SFR estimation of three characterised camera systems, two near-linear and one highly non-linear.
Investigations are conducted using large, diverse image datasets as well as restricting scene content and the number of images used for the estimation. The resulting estimates are comparable to ISO 12233 e-SFRs derived from test chart inputs for the near-linear systems; the overall estimate stays within one standard deviation of the equivalent test chart measurement. Results from the highly non-linear system indicate scene and processing dependency, potentially leading to a more representative SFR measure than current chart-based approaches for such systems. These results suggest that the proposed method is a viable alternative to the ISO technique.
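    For concreteness, the sketch below shows a much-simplified slanted-edge computation of the kind that underlies both the ISO 12233 e-SFR and the NS-SFR: project an edge ROI onto the edge normal into an oversampled edge spread function (ESF), differentiate to a line spread function (LSF), and take its Fourier magnitude. The edge-angle fitting, binning safeguards, and NS-SFR selection stages of the actual methodology are omitted, and all names are illustrative.

```python
import numpy as np

def sfr_from_edge_roi(roi, edge_angle_deg, oversample=4):
    """Simplified slanted-edge SFR: bin pixels by distance from an
    assumed straight edge into a supersampled ESF, differentiate to
    the LSF, window it, and normalise the FFT magnitude to DC."""
    h, w = roi.shape
    slope = np.tan(np.deg2rad(edge_angle_deg))
    ys, xs = np.mgrid[0:h, 0:w]
    dist = xs - slope * ys                      # offset from the edge line
    bins = np.round((dist - dist.min()) * oversample).astype(int)
    counts = np.bincount(bins.ravel())
    esf = np.bincount(bins.ravel(), weights=roi.ravel().astype(float))
    esf = esf / np.maximum(counts, 1)           # supersampled edge profile
    lsf = np.diff(esf) * np.hanning(len(esf) - 1)
    sfr = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)  # cycles/pixel
    return freqs, sfr / sfr[0]
```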