    Plenoptic Signal Processing for Robust Vision in Field Robotics

    This thesis proposes the use of plenoptic cameras for improving the robustness and simplicity of machine vision in field robotics applications. Dust, rain, fog, snow, murky water and insufficient light can cause even the most sophisticated vision systems to fail. Plenoptic cameras offer an appealing alternative to conventional imagery by gathering significantly more light over a wider depth of field, and capturing a rich 4D light field structure that encodes textural and geometric information. The key contributions of this work lie in exploring the properties of plenoptic signals and developing algorithms for exploiting them. It lays the groundwork for the deployment of plenoptic cameras in field robotics by establishing a decoding, calibration and rectification scheme appropriate to compact, lenslet-based devices. Next, the frequency-domain shape of plenoptic signals is elaborated and exploited by constructing a filter which focuses over a wide depth of field rather than at a single depth. This filter is shown to reject noise, improving contrast in low light and through attenuating media, while mitigating occluders such as snow, rain and underwater particulate matter. Next, a closed-form generalization of optical flow is presented which directly estimates camera motion from first-order derivatives. An elegant adaptation of this "plenoptic flow" to lenslet-based imagery is demonstrated, as well as a simple, additive method for rendering novel views. Finally, the isolation of dynamic elements from a static background is considered, a task complicated by the non-uniform apparent motion caused by a mobile camera. Two elegant closed-form solutions are presented, dealing with monocular time series and light field image pairs. This work emphasizes non-iterative, noise-tolerant, closed-form, linear methods with predictable and constant runtimes, making them suitable for real-time embedded implementation in field robotics applications.
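
    As an illustration of the closed-form, derivative-based estimation style the abstract emphasizes, the sketch below solves the classic linearized brightness-constancy equations for a single global 2D translation; the thesis' plenoptic flow generalizes this least-squares construction to full camera motion over 4D light fields. This is a minimal stand-in, not the thesis' algorithm.

    ```python
    import numpy as np

    def global_flow(I0, I1):
        """Closed-form global translation estimate from first-order derivatives.

        Minimal 2D illustration of the non-iterative, derivative-based idea;
        the full plenoptic flow extends this to 6-DOF motion on light fields.
        """
        # First-order spatial derivatives (axis 0 = rows = y, axis 1 = cols = x).
        Iy, Ix = np.gradient(I0.astype(float))
        # Temporal derivative between the two frames.
        It = I1.astype(float) - I0.astype(float)

        # Normal equations of the linearized brightness-constancy constraint
        # Ix*u + Iy*v + It = 0, solved once in the least-squares sense.
        A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                      [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
        b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
        return np.linalg.solve(A, b)  # (u, v) displacement in pixels
    ```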

    Compressive Sensing Applications in Measurement: Theoretical issues, algorithm characterization and implementation

    At its core, signal acquisition is concerned with efficient algorithms and protocols capable of capturing and encoding the signal information content. For over five decades, the undisputed theoretical benchmark has been the well-known Shannon sampling theorem, and the corresponding notion of information has been inseparably tied to signal spectral bandwidth. Contemporary society is founded on the almost instantaneous exchange of information, which is mainly conveyed in digital format. Accordingly, modern communication devices are expected to cope with huge amounts of data, in a typical sequence of steps comprising acquisition, processing and storage. Despite continual technological progress, the conventional acquisition protocol has come under mounting pressure, as it requires a computational effort unrelated to the actual signal information content. In recent years, a novel sensing paradigm known as Compressive Sensing (CS) has spread quickly among several branches of information theory. It relies on two main principles, signal sparsity and incoherent sampling, and employs them to acquire the signal directly in condensed form. The sampling rate is related to the signal information rate rather than to the signal spectral bandwidth. Given a sparse signal, its information content can be recovered even from what could appear to be an incomplete set of measurements, at the expense of a greater computational effort at the reconstruction stage. My Ph.D. thesis builds on the field of Compressive Sensing and illustrates how sparsity and incoherence properties can be exploited to design efficient sensing strategies, or to better understand the sources of uncertainty that affect measurements. The research activity has dealt with both theoretical and practical issues, drawn from measurement application contexts ranging from radio-frequency communications to synchrophasor estimation and neurological activity investigation. The thesis is organised in four chapters whose key contributions include:
    • definition of a general mathematical model for sparse signal acquisition systems, with particular focus on the implications of sparsity and incoherence;
    • characterization of the main algorithmic families for recovering sparse signals from a reduced set of measurements, with particular focus on the impact of additive noise;
    • implementation and experimental validation of a CS-based algorithm for providing accurate preliminary information and suitably preprocessed data for a vector signal analyser or a cognitive radio application;
    • design and characterization of a CS-based super-resolution technique for spectral analysis in the discrete Fourier transform (DFT) domain;
    • definition of an overcomplete dictionary which explicitly accounts for the spectral leakage effect;
    • insight into the so-called off-the-grid estimation approach, obtained by properly combining CS-based super-resolution with polar interpolation of DFT coefficients;
    • exploration and analysis of the implications of sparsity in quasi-stationary operating conditions, emphasizing the importance of time-varying sparse signal models;
    • definition of an enhanced spectral content model for spectral analysis applications in dynamic conditions by means of Taylor-Fourier transform (TFT) approaches.
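
    A minimal sketch of the sparse-recovery step behind the second bullet: textbook Orthogonal Matching Pursuit recovering a k-sparse vector from a few random, incoherent measurements. This illustrates one generic greedy recovery family, not the thesis' own implementation; all names and parameters are illustrative.

    ```python
    import numpy as np

    def omp(Phi, y, k):
        """Orthogonal Matching Pursuit: recover a k-sparse x from y = Phi @ x."""
        residual, support = y.copy(), []
        for _ in range(k):
            # Pick the column most correlated with the current residual.
            support.append(int(np.argmax(np.abs(Phi.T @ residual))))
            # Least-squares fit of y on the enlarged support.
            coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
            residual = y - Phi[:, support] @ coef
        x = np.zeros(Phi.shape[1])
        x[support] = coef
        return x

    # Incoherent sampling of a sparse signal: m << n measurements suffice.
    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
    x_hat = omp(Phi, Phi @ x, k)                  # recovers the k-sparse signal
    ```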

    Phase extraction of non-stationary signals produced in dynamic interferometry involving speckle waves

    It is now widely acknowledged, among communities of researchers and engineers from very different backgrounds, that speckle interferometry (SI) offers powerful techniques for characterizing rough mechanical surfaces with submicron accuracy in the static or quasi-static regime, when small displacements are involved (typically several microns or tens of microns). The issue of dynamic regimes with possibly large deformations (typically several hundreds of microns) is still open and prevents an even more widespread use of speckle techniques. This is essentially due to the lack of efficient processing schemes able to cope with non-stationary AM-FM interferometric signals. In addition, decorrelation-induced phase errors are a hindrance to accurate measurement when such large displacements and classical fringe-analysis techniques are considered. This work is an attempt to address those issues and to make the most of speckle interferometry signals. Our answers to those problems operate on two different levels. First of all, we adopt the temporal analysis approach, i.e. the analysis of the temporal signal of each pixel of the sensor area used to record the interferograms. We return to the basics of phase extraction to properly identify the conditions under which the computed phase is meaningful and thus gives some insight into the physical phenomenon under analysis. Because of their intrinsically non-stationary nature, a preprocessing tool is needed to put the SI temporal signals into a shape that ensures an accurate phase computation, whichever technique is chosen. This is where Empirical Mode Decomposition (EMD) comes in. This technique, somewhat akin to adaptive filtering, has been studied and tailored to our needs. The EMD has shown a great ability to efficiently remove the randomly fluctuating background intensity and to evaluate the modulation intensity. The Hilbert transform (HT) is the natural quadrature operator. Its use to build an analytic signal from the so-detrended SI signal, for subsequent phase computation, has been studied and assessed. Other phase extraction techniques have been considered as well for comparison purposes. Finally, our answer to the decorrelation-induced phase error relies on the well-known result that the higher the pixel modulation intensity, the lower the random phase error. We took advantage of this result – linked not only to basic SNR considerations, but more specifically to the intrinsic phase structure of speckle fields – in a novel approach. The regions within the pixel signal history classified as unreliable because they are under-modulated are simply discarded. An interpolation step based on Delaunay triangulation is then carried out on the resulting non-uniformly sampled phase maps to recover a smooth phase that relies on the most reliable available data. Our schemes have been tested and discussed with simulated and experimental SI signals. We have ultimately developed a versatile, accurate and efficient phase extraction procedure, well able to tackle the challenge of characterizing dynamic behaviors, even for displacements and/or deformations beyond the classical limit of the correlation dimensions.
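
    As a rough illustration of the phase-extraction chain described above, the sketch below computes one pixel's temporal phase via the Hilbert transform after detrending. SciPy provides no EMD, so a plain linear detrend stands in for it here; the modulation magnitude is the quantity the under-modulation test would act on.

    ```python
    import numpy as np
    from scipy.signal import hilbert, detrend

    def temporal_phase(pixel_signal):
        """Phase history of one pixel's interferometric signal.

        Sketch only: the thesis detrends with Empirical Mode Decomposition
        (a linear detrend is substituted here), then uses the Hilbert
        transform as the quadrature operator.
        """
        s = detrend(pixel_signal)          # EMD would remove the fluctuating background
        analytic = hilbert(s)              # analytic signal: s + j * HT{s}
        phase = np.unwrap(np.angle(analytic))
        modulation = np.abs(analytic)      # low values flag unreliable samples
        return phase, modulation
    ```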

    Interference Mitigation and Localization Based on Time-Frequency Analysis for Navigation Satellite Systems

    Nowadays, the operation of global navigation satellite systems (GNSS) is imperative across a multitude of applications worldwide. The increasing reliance on accurate positioning and timing information has made the consequences of possible service outages in satellite navigation systems more serious than ever. Among other threats, interference is regarded as the primary one to their operation. Due to the recent proliferation of portable interferers, notably jammers, it has now become common for GNSS receivers to endure simultaneous attacks from multiple sources of interference, which are likely spatially distributed and transmit different modulations. To the best knowledge of the author, the present dissertation is the first publication to investigate the use of the S-transform (ST) to devise countermeasures to interference. The original contributions in this context are mainly:
    • the formulation of a complexity-scalable ST implementable in real time as a bank of filters;
    • a method for characterizing and localizing multiple in-car jammers through interference snapshots that are collected by separate receivers and analysed with a clever use of the ST;
    • a preliminary assessment of novel methods for mitigating generic interference at the receiver end by means of the ST and more computationally efficient variants of the transform.
    Besides GNSS, the proposed countermeasures to interference are equally applicable to protecting any direct-sequence spread spectrum (DS-SS) communication system.
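
    For reference, a minimal direct implementation of the discrete S-transform using frequency-domain Gaussian windows. The dissertation's contribution is a complexity-scalable, real-time filter-bank formulation, which this naive sketch does not attempt to reproduce.

    ```python
    import numpy as np

    def s_transform(h):
        """Discrete S-transform of a 1-D signal h (textbook FFT formulation).

        Returns a (N//2 + 1, N) array: rows are frequency voices, columns
        are time samples, giving the time-frequency representation.
        """
        N = len(h)
        H = np.fft.fft(h)
        m = np.fft.fftfreq(N) * N              # integer frequency indices
        S = np.zeros((N // 2 + 1, N), dtype=complex)
        S[0] = np.mean(h)                      # the zero voice is the signal mean
        for n in range(1, N // 2 + 1):
            # Gaussian window scales with the analysis frequency n.
            gauss = np.exp(-2 * np.pi**2 * m**2 / n**2)
            S[n] = np.fft.ifft(np.roll(H, -n) * gauss)
        return S
    ```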

    Adaptive sampling by histogram equalization: theory, algorithms, and applications, 2007

    We present the investigation of a novel, progressive, adaptive sampling scheme. This scheme is based on the distribution of already obtained samples. Evenly spaced sampling of a function with varying slopes or degrees of complexity yields relatively fewer samples from the regions of higher slopes. Hence, a distribution of these samples will exhibit a relatively lower representation of the function values from regions of higher complexity. Compared to evenly spaced sampling, a scheme that attempts to progressively equalize the histogram of the function values results in a higher concentration of samples in regions of higher complexity. This is a more efficient distribution of sample points, hence the term adaptive sampling. This conjecture is confirmed by numerous examples. Compared to existing adaptive sampling schemes, our approach has the unique ability to efficiently obtain expensive samples from a space with no prior knowledge of the relative levels of variation or complexity in the sampled function. This is a requirement in numerous scientific computing applications. Three models are employed to achieve the equalization of the distribution of sampled function values: (1) an active-walker model, containing elements of random walk theory and the motion of Brownian particles; (2) an ant model, based on simulating the behavior of ants in search of resources; and (3) an evolutionary algorithm model. Their performances are compared on objective bases such as the entropy measure of information and the Nyquist-Shannon minimum sampling rate for band-limited signals. The development of this adaptive sampling scheme was informed by a need to efficiently synthesize hyperspectral images used in place of real images. The performance of the adaptive sampling scheme as an aid to the image synthesis process is evaluated. The synthesized images are used in the development of a measure of clutter in hyperspectral images. This process is described, and the results are presented.
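
    A loose, walker-flavored sketch of the histogram-equalization idea (the dissertation's actual active-walker, ant, and evolutionary models are more elaborate): each new sample is drawn near an existing sample whose value falls in the rarest value bin, so samples accumulate where the function changes fastest. All function and parameter names here are illustrative.

    ```python
    import numpy as np

    def equalized_sampling(f, lo, hi, n_samples, n_bins=16, step=0.01):
        """Progressive 1-D sampling that equalizes the histogram of f-values."""
        xs = list(np.linspace(lo, hi, 8))        # coarse, evenly spaced seeds
        ys = [f(x) for x in xs]
        for _ in range(n_samples - len(xs)):
            counts, edges = np.histogram(ys, bins=n_bins)
            # Rarest non-empty value bin (empty bins are penalized out).
            rare = np.argmin(counts + (counts == 0) * n_samples)
            in_bin = [i for i, y in enumerate(ys)
                      if edges[rare] <= y <= edges[rare + 1]]
            # Take a short random walk step from a sample inside that bin.
            i = np.random.choice(in_bin)
            x_new = np.clip(xs[i] + np.random.uniform(-step, step) * (hi - lo),
                            lo, hi)
            xs.append(x_new)
            ys.append(f(x_new))
        return np.array(xs), np.array(ys)
    ```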

    Audio source separation techniques including novel time-frequency representation tools

    This thesis explores the development of tools for audio representation, with applications in audio source separation and in the Music Information Retrieval (MIR) field. A novel constant-Q transform, called the IIR-CQT, is introduced. The transform allows a flexible design and achieves low computational cost. In addition, an independent development of the Fan Chirp Transform (FChT), focused on the representation of simultaneous sources, is studied; it has several applications in the analysis of polyphonic music signals. Different applications are explored in the MIR field, some of them directly related to the low-level representation tools that were analyzed. One of these applications is a visualization tool based on the FChT that has proved useful for musicological analysis. The tool has been released as free, open-source software. The proposed transform has also been used to detect and track fundamental frequencies of harmonic sources in polyphonic music. In addition, the slope of the pitch was used to define a similarity measure between two harmonic components that are close in time. This measure helps clustering algorithms track multiple sources in polyphonic music. The FChT was also used in the context of the Query by Humming application. One of the main limitations of such an application is the construction of the search database. In this work, we propose an algorithm to automatically populate the database of an existing Query by Humming system, with promising results. Finally, two audio source separation techniques are studied. The first is the separation of harmonic signals based on the FChT. The second is an application in which the fundamental frequency of the sources is assumed to be known (the Score-Informed Source Separation problem).
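
    For orientation, a naive direct-summation constant-Q transform with geometrically spaced bins and Q-dependent window lengths. The IIR-CQT proposed in the thesis achieves the same constant-Q analysis at much lower cost; this reference sketch is not its implementation, and the parameter values are illustrative.

    ```python
    import numpy as np

    def naive_cqt(x, fs, f_min=55.0, bins_per_octave=12, n_bins=48, q=17.0):
        """Naive constant-Q transform by direct inner products."""
        out = np.zeros(n_bins, dtype=complex)
        for k in range(n_bins):
            fk = f_min * 2 ** (k / bins_per_octave)   # geometric center frequencies
            n_k = int(round(q * fs / fk))              # window shrinks as fk grows
            n = np.arange(min(n_k, len(x)))
            # Windowed complex exponential tuned to fk (constant Q = fk / bandwidth).
            kernel = np.hanning(len(n)) * np.exp(-2j * np.pi * fk * n / fs)
            out[k] = (x[:len(n)] @ kernel) / len(n)
        return out
    ```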

    Window Functions and Their Applications in Signal Processing

    Window functions – otherwise known as weighting functions, tapering functions, or apodization functions – are mathematical functions that are zero-valued outside a chosen interval. They are well established as a vital part of digital signal processing. Window Functions and Their Applications in Signal Processing presents an exhaustive and detailed account of window functions and their applications in signal processing, focusing on the areas of digital spectral analysis, design of FIR filters, pulse compression radar, and speech signal processing. Comprehensively reviewing previous research and recent developments, this book:
    • provides suggestions on how to choose a window function for particular applications;
    • discusses Fourier analysis techniques and pitfalls in the computation of the DFT;
    • introduces window functions in the continuous-time and discrete-time domains;
    • considers two implementation strategies for window functions, in the time and frequency domains;
    • explores well-known applications of window functions in the fields of radar, sonar, biomedical signal analysis, audio processing, and synthetic aperture radar.
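
    A small, self-contained check of the leakage trade-off the book organizes: the peak sidelobe level of three classic windows (all built into NumPy), measured from the zero-padded window spectrum. Wider main lobes buy dramatically lower sidelobes.

    ```python
    import numpy as np

    def peak_sidelobe_db(w, pad=8192):
        """Peak sidelobe level of a window, in dB relative to the main lobe."""
        mag = np.abs(np.fft.rfft(w, pad))
        mag_db = 20 * np.log10(mag / mag.max() + 1e-12)
        # Walk down the main lobe to its first null, then take the max beyond it.
        i = 1
        while i < len(mag_db) and mag_db[i] <= mag_db[i - 1]:
            i += 1
        return mag_db[i:].max()

    N = 64
    for name, w in [("rectangular", np.ones(N)),
                    ("hann", np.hanning(N)),
                    ("blackman", np.blackman(N))]:
        # Expect roughly -13 dB, -32 dB and -58 dB respectively.
        print(f"{name:11s} peak sidelobe: {peak_sidelobe_db(w):6.1f} dB")
    ```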

    Molecular Dynamics Simulation

    Condensed matter systems, ranging from simple fluids and solids to complex multicomponent materials and even biological matter, are governed by well-understood laws of physics, within the formal theoretical framework of quantum theory and statistical mechanics. On the relevant scales of length and time, the appropriate ‘first-principles’ description needs only the Schrödinger equation together with Gibbs averaging over the relevant statistical ensemble. However, this program cannot be carried out straightforwardly: dealing with electron correlations is still a challenge for the methods of quantum chemistry. Similarly, standard statistical mechanics makes precise explicit statements only about the properties of systems for which the many-body problem can be effectively reduced to one of independent particles or quasi-particles. [...
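
    As a concrete anchor for the topic, a minimal classical MD sketch in reduced units: Lennard-Jones pair forces integrated with the standard velocity-Verlet scheme. This is a generic textbook illustration, not drawn from the volume itself.

    ```python
    import numpy as np

    def lennard_jones(pos, eps=1.0, sigma=1.0):
        """Forces and potential energy for a Lennard-Jones cluster (reduced units)."""
        n = len(pos)
        forces, energy = np.zeros_like(pos), 0.0
        for i in range(n):
            for j in range(i + 1, n):
                r_vec = pos[i] - pos[j]
                r2 = r_vec @ r_vec
                sr6 = (sigma**2 / r2) ** 3
                energy += 4 * eps * (sr6**2 - sr6)
                # F = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2 * r_vec
                f = 24 * eps * (2 * sr6**2 - sr6) / r2 * r_vec
                forces[i] += f
                forces[j] -= f
        return forces, energy

    def velocity_verlet(pos, vel, dt=1e-3, steps=1000):
        """Velocity-Verlet time integration for unit-mass particles."""
        f, _ = lennard_jones(pos)
        for _ in range(steps):
            pos = pos + vel * dt + 0.5 * f * dt**2   # position update
            f_new, _ = lennard_jones(pos)             # forces at new positions
            vel = vel + 0.5 * (f + f_new) * dt        # velocity update
            f = f_new
        return pos, vel
    ```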