
    Hyperspectral Remote Sensing Data Analysis and Future Challenges

    Full text link

    Sensitivity Analysis for Measurements of Multipath Parameters Pertinent to TOA based Indoor Geolocation

    Get PDF
    Recently, indoor geolocation technologies have been attracting tremendous attention. For indoor environments, the fine time resolution of ultra-wideband (UWB) signals offers the potential for accurate distance measurement of the direct path (DP) between a number of reference sources and the people or assets of interest. However, once the DP is not available or is shadowed, substantial errors are introduced into the ranging measurements, leading to large localization errors when measurements from multiple sources are combined. The measurement accuracy in undetected direct path (UDP) conditions can be improved in some cases by exploiting the geolocation information contained in the indirect-path measurements. Therefore, the dynamic spatial behavior of paths is an important issue for positioning techniques based on the time of arrival (TOA) of indirect paths. The objectives of this thesis are twofold. The first is to analyze the sensitivity of TOA estimation techniques based on the TOA of the direct path; we studied the effect of distance, bandwidth, and multipath environment on the accuracy of various TOA estimation techniques. The second is to study the sensitivity of multipath parameters pertinent to TOA estimation techniques based on the TOA of the indirect paths; we mainly looked into the effect of distance, bandwidth, the threshold for picking paths, and multipath environment on the number of multipath components (MPCs) and path persistency. Our results are based on data from a new measurement campaign conducted on the 3rd floor of the AK laboratory. For the TOA estimation techniques based on the DP, the line of sight (LOS) scenario provides the greatest accuracy, and these techniques are most sensitive to bandwidth availability in the obstructed line of sight (OLOS) scenario. All the TOA estimation algorithms perform poorly in the UDP scenario, although the use of higher bandwidth can reduce the ranging error to some extent. Based on our processed results, a proposal for selecting the appropriate TOA estimation technique under given constraints is presented. The sensitivity study of multipath parameters pertinent to indirect-path-based TOA estimation techniques shows that the number of MPCs is very sensitive to the threshold for picking paths and to the noise threshold. The number of MPCs generally decreases as the distance increases, while larger bandwidth always resolves more MPCs. The multipath components behave more persistently in LOS and OLOS scenarios than in UDP scenarios, and the use of larger bandwidth and a higher threshold for picking paths also results in more persistent paths.
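
    As a rough illustration of the kind of threshold-based TOA estimation discussed above, the following sketch (not taken from the thesis; the sampling rate, threshold, and channel profile are invented for the example) estimates the TOA of a simulated channel impulse response as the first sample whose power exceeds a threshold set relative to the strongest path, so that a weak direct path can still be picked ahead of a dominant reflection.

```python
import numpy as np

def toa_first_peak(cir, fs, threshold_db=-20.0):
    """Estimate the TOA as the delay of the first sample whose power exceeds a
    threshold set relative to the strongest multipath component."""
    power = np.abs(cir) ** 2
    thresh = power.max() * 10 ** (threshold_db / 10.0)
    first_idx = np.argmax(power >= thresh)   # index of the first bin above the threshold
    return first_idx / fs                    # TOA in seconds

# Toy channel impulse response: a weak direct path followed by a dominant
# reflection, mimicking a condition where pure peak-picking would overestimate range.
fs = 3e9                                     # assumed sampling rate (3 GHz)
cir = np.zeros(600, dtype=complex)
cir[90] = 0.2                                # weak direct path at 30 ns
cir[210] = 1.0                               # strong indirect path at 70 ns
toa = toa_first_peak(cir, fs)
print(f"estimated TOA: {toa * 1e9:.1f} ns  ->  range: {toa * 3e8:.2f} m")
```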

    Minsight: A Fingertip-Sized Vision-Based Tactile Sensor for Robotic Manipulation

    Full text link
    Intelligent interaction with the physical world requires perceptual abilities beyond vision and hearing; vibrant tactile sensing is essential for autonomous robots to dexterously manipulate unfamiliar objects or safely contact humans. Therefore, robotic manipulators need high-resolution touch sensors that are compact, robust, inexpensive, and efficient. The soft vision-based haptic sensor presented herein is a miniaturized and optimized version of the previously published sensor Insight. Minsight has the size and shape of a human fingertip and uses machine learning methods to output high-resolution maps of 3D contact force vectors at 60 Hz. Experiments confirm its excellent sensing performance, with a mean absolute force error of 0.07 N and contact location error of 0.6 mm across its surface area. Minsight's utility is shown in two robotic tasks on a 3-DoF manipulator. First, closed-loop force control enables the robot to track the movements of a human finger based only on tactile data. Second, the informative value of the sensor output is shown by detecting whether a hard lump is embedded within a soft elastomer with an accuracy of 98%. These findings indicate that Minsight can give robots the detailed fingertip touch sensing needed for dexterous manipulation and physical human-robot interaction.
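
    For context only, the sketch below shows one plausible way the two reported figures of merit (mean absolute force error and contact location error) could be computed from test-set predictions; the array shapes and toy data are assumptions, not Minsight's actual evaluation code.

```python
import numpy as np

def tactile_metrics(pred_force, true_force, pred_xy, true_xy):
    """Mean absolute 3D force error (N) and mean contact-location error (mm)."""
    force_mae = np.mean(np.abs(pred_force - true_force))
    loc_err = np.mean(np.linalg.norm(pred_xy - true_xy, axis=1))
    return force_mae, loc_err

# Synthetic stand-in for test-set predictions (hypothetical numbers).
rng = np.random.default_rng(0)
true_force = rng.uniform(-2.0, 2.0, size=(1000, 3))        # ground-truth 3D forces
pred_force = true_force + rng.normal(0.0, 0.07, (1000, 3))
true_xy = rng.uniform(0.0, 20.0, size=(1000, 2))            # contact locations on the surface
pred_xy = true_xy + rng.normal(0.0, 0.6, (1000, 2))
mae, loc = tactile_metrics(pred_force, true_force, pred_xy, true_xy)
print(f"force MAE: {mae:.3f} N, location error: {loc:.2f} mm")
```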

    Biologically inspired processing of radar and sonar target echoes

    Get PDF
    Modern radar and sonar systems rely on active sensing to accomplish a variety of tasks, including detection and classification of targets, accurate localization and tracking, and autonomous navigation and collision avoidance. Bats have relied on active sensing for over 50 million years, and their echolocation system provides remarkable perceptual and navigational performance that is the envy of synthetic systems. The aim of this study is to investigate the mechanisms bats use to process echo acoustic signals and to determine whether there are lessons that can be learned and ultimately applied to radar systems. The basic principles of the bat auditory system's processing are studied and applied to radio frequencies. A baseband derivative of the Spectrogram Correlation and Transformation (SCAT) model of the bat auditory system, called Baseband SCAT (BSCT), has been developed. The BSCT receiver is designed for processing radio-frequency signals and to allow an analytical treatment of the expected performance. Simulations and experiments have been carried out to confirm that the outputs of interest of both models are “equivalent”. The response of the BSCT to two closely spaced targets is studied, and it is shown that the problem of measuring the relative distance between two targets is converted to a problem of measuring the range to a single target. A nearly two-fold improvement in the resolution of two close scatterers is achieved with respect to the matched filter. The robustness of the algorithm has been demonstrated through laboratory measurements using ultrasound and radio frequencies (RF). Pairs of spheres, flat plates, and vertical rods were used as targets to represent two main reflectors.
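
    The matched-filter baseline that the BSCT is compared against can be summarized in a few lines. The sketch below (illustrative parameters, not those of the study) correlates a received signal containing two overlapping chirp echoes with the transmitted chirp; because the echoes are separated by less than roughly 1/B in delay, their correlation peaks merge into a single broad lobe, which is the resolution limit the BSCT is reported to improve on.

```python
import numpy as np

# Matched-filter baseline: correlate the received signal with the transmitted chirp
# and read target delays from the peaks of the correlation magnitude.
fs = 1e6                                    # sample rate (Hz)
T, B = 1e-3, 100e3                          # chirp duration (s) and bandwidth (Hz)
t = np.arange(0, T, 1 / fs)
tx = np.exp(1j * np.pi * (B / T) * t ** 2)  # linear FM chirp

def received(delays, amps, n):
    """Superpose delayed, scaled copies of the transmitted chirp."""
    rx = np.zeros(n, dtype=complex)
    for d, a in zip(delays, amps):
        k = int(round(d * fs))
        rx[k:k + len(tx)] += a * tx
    return rx

# Two echoes only 8 microseconds apart, i.e. below the ~1/B = 10 us matched-filter
# resolution limit, so their correlation peaks merge into one broad lobe.
rx = received([2.000e-3, 2.008e-3], [1.0, 0.8], n=5000)
mf = np.abs(np.correlate(rx, tx, mode="valid"))
print(f"delay at the highest peak: {np.argmax(mf) / fs * 1e6:.1f} us")
```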

    Rich Feature Distillation with Feature Affinity Module for Efficient Image Dehazing

    Full text link
    Single-image haze removal is a long-standing hurdle for computer vision applications. Several works have focused on transferring advances from image classification, detection, and segmentation to the niche of image dehazing, primarily focusing on contrastive learning and knowledge distillation. However, these approaches prove computationally expensive, raising concerns about their applicability to on-the-edge use cases. This work introduces a simple, lightweight, and efficient framework for single-image haze removal, exploiting rich "dark-knowledge" information from a lightweight pre-trained super-resolution model via the notion of heterogeneous knowledge distillation. We designed a feature affinity module to maximize the flow of rich feature semantics from the super-resolution teacher to the student dehazing network. In order to evaluate the efficacy of our proposed framework, its performance as a plug-and-play setup to a baseline model is examined. Our experiments are carried out on the RESIDE-Standard dataset to demonstrate the robustness of our framework to the synthetic and real-world domains. The extensive qualitative and quantitative results provided establish the effectiveness of the framework, achieving gains of up to 15% (PSNR) while reducing the model size by roughly 20 times.
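
    A minimal sketch of an affinity-based distillation loss, the general family the feature affinity module belongs to, is given below; the exact formulation used in the paper is not specified here, so the normalization and loss choices are assumptions. The useful property shown is that affinity matrices compare spatial structure even when teacher and student feature maps have different channel counts.

```python
import numpy as np

def affinity(feat):
    """Pairwise spatial affinity of a (C, H, W) feature map: L2-normalize the
    per-position channel descriptors, then take all pairwise inner products."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    return f.T @ f                           # (H*W, H*W) cosine-similarity matrix

def affinity_distillation_loss(student_feat, teacher_feat):
    """L1 distance between the student's and the teacher's affinity matrices."""
    return np.mean(np.abs(affinity(student_feat) - affinity(teacher_feat)))

# Toy activations standing in for intermediate features of the dehazing student
# and the frozen super-resolution teacher; shapes are assumptions.
rng = np.random.default_rng(0)
student = rng.normal(size=(32, 16, 16))
teacher = rng.normal(size=(64, 16, 16))      # channel counts may differ
print("affinity distillation loss:", affinity_distillation_loss(student, teacher))
```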

    Deep learned Electrical Resistance Tomography Applications in Structural Health Monitoring

    Get PDF
    In recent studies, electrical resistance tomography (ERT) has been explored as a non-destructive testing imaging modality in conjunction with structural health monitoring (SHM). This imaging modality has been shown to be able to locate cracks in cement-based materials as well as reconstruct strain and stress distributions in nano-composite materials. However, due to the ill-conditioned nature of the ERT inverse problem, the computational cost of solving such problems can be high. In order to reduce the overall computational cost of solving the ERT inverse problem in practical applications, we propose using a deep learning approach to address this challenge. The deep-learned ERT frameworks have been successfully implemented and validated using simulation and experimental data for various materials relevant to SHM. The results indicate that the deep-learned ERT frameworks are feasible for implementation in SHM applications
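
    The core idea of a deep-learned ERT solver, replacing the ill-conditioned iterative inversion with a single learned forward pass from boundary measurements to a conductivity image, can be sketched as follows. The electrode count, measurement count, pixel grid, and network size are assumptions for illustration; in practice the weights would be fit to simulated measurement/conductivity pairs, as described above.

```python
import numpy as np

# Learned ERT inversion in one forward pass: map a frame of boundary voltage
# measurements directly to a conductivity image. Sizes are assumptions
# (16-electrode system -> 208 measurements, 32x32 reconstruction grid).
n_meas, n_pix, n_hidden = 208, 32 * 32, 256

def mlp_forward(v, params):
    """Two-layer MLP: boundary voltages -> flattened conductivity image."""
    W1, b1, W2, b2 = params
    h = np.maximum(0.0, v @ W1 + b1)         # ReLU hidden layer
    return h @ W2 + b2                       # linear output layer

# Random weights only as placeholders for a trained model.
rng = np.random.default_rng(0)
params = (rng.normal(0, 0.05, (n_meas, n_hidden)), np.zeros(n_hidden),
          rng.normal(0, 0.05, (n_hidden, n_pix)), np.zeros(n_pix))

v = rng.normal(size=n_meas)                  # one synthetic measurement frame
sigma_img = mlp_forward(v, params).reshape(32, 32)
print("reconstructed conductivity map:", sigma_img.shape)
```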

    Automatic Target Recognition in Synthetic Aperture Radar Imagery: A State-of-the-Art Review

    Get PDF
    The purpose of this paper is to survey and assess the state-of-the-art in automatic target recognition for synthetic aperture radar imagery (SAR-ATR). The aim is not to develop an exhaustive survey of the voluminous literature, but rather to capture in one place the various approaches for implementing the SAR-ATR system. This paper is meant to be as self-contained as possible, and it approaches the SAR-ATR problem from a holistic end-to-end perspective. A brief overview of the breadth of the SAR-ATR challenges is provided. This is couched in terms of a single-channel SAR, and it is extendable to multi-channel SAR systems. Stages pertinent to the basic SAR-ATR system structure are defined, and the motivations of the requirements and constraints on the system constituents are addressed. For each stage in the SAR-ATR processing chain, a taxonomization methodology for surveying the numerous methods published in the open literature is proposed. Carefully selected works from the literature are presented under the taxa proposed. Novel comparisons, discussions, and comments are pinpointed throughout this paper. A two-fold benchmarking scheme for evaluating existing SAR-ATR systems and motivating new system designs is proposed. The scheme is applied to the works surveyed in this paper. Finally, a discussion is presented in which various interrelated issues, such as standard operating conditions, extended operating conditions, and target-model design, are addressed. This paper is a contribution toward fulfilling an objective of end-to-end SAR-ATR system design.

    Active Backscattering Positioning System Using Innovative Harmonic Oscillator Tags for Future Internet of Things: Theory and Experiments

    Get PDF
    By 2020, the Internet of Things (IoT) will probably enable 25 billion connected objects, create 44 ZB of data, and unlock 11 trillion US dollars in business opportunities. Therefore, this topic has been attracting tremendous research interest across the academic world. One of the key enabling technologies for IoT is accurate indoor physical positioning. The development of an indoor positioning system with high accuracy, high resolution, multitarget operation, low cost, small footprint, and low power consumption is the major objective in this area. Conventional indoor positioning systems based on WiFi or radio-frequency identification (RFID) technology cannot fulfill these requirements, mainly because their devices and signals are not purposely designed for achieving the targeted goals. Researchers have found that by implementing different types of modulation on the tags, continuous-wave (CW) radar and its derivatives become promising solutions. The research activities presented in this Ph.D. thesis are carried out towards the goal of developing multitarget two-dimensional (2-D) indoor positioning systems based on harmonic backscattering active tags together with a frequency-modulated continuous-wave (FMCW) technique. The research contributions of this thesis can be summarized as follows. First, the design of a harmonic active circuit, more specifically a class of innovative harmonic oscillators used as the core component of the active tags in our system, involves a large-signal design methodology and characterization facilities. The large-signal network analyzer (LSNA) is an emerging instrument based on the theoretical foundation of the polyharmonic distortion (PHD) framework. Although LSNAs have been commercially available since 2008, standards and research organizations such as the National Institute of Standards and Technology (NIST) of the US are still working towards a widely recognized standard to evaluate and cross-reference their performance. In this work, a multi-harmonic generation artifact for LSNA verification is developed. It is an active device that can generate the first 5 harmonics of an input signal with ultra-stable amplitude and phase response regardless of load impedance variation.
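
    The FMCW ranging principle underlying such a system can be illustrated with a short dechirp-and-FFT sketch: the beat frequency obtained by mixing the received sweep with the transmitted one is proportional to the round-trip delay, so the range follows from R = f_beat * c / (2 * slope). Harmonic tags typically reply on a harmonic of the interrogating sweep so the tag return is separated from clutter at the fundamental, but the beat-to-range relation has the same form; all parameter values below are illustrative, not those of the thesis.

```python
import numpy as np

# Dechirp-and-FFT FMCW ranging: the beat frequency after mixing the received
# sweep with the transmitted one is slope * tau, so R = f_beat * c / (2 * slope).
c = 3e8
B, T = 150e6, 1e-3                           # sweep bandwidth (Hz) and duration (s)
fs = 2e6                                     # baseband sample rate (Hz)
slope = B / T
t = np.arange(0, T, 1 / fs)

R_true = 6.0                                 # tag placed 6 m away
tau = 2 * R_true / c                         # round-trip delay
tx_phase = np.pi * slope * t ** 2
rx_phase = np.pi * slope * (t - tau) ** 2
beat = np.cos(tx_phase - rx_phase)           # dechirped (mixed) signal

spec = np.abs(np.fft.rfft(beat * np.hanning(len(beat))))
f_beat = np.fft.rfftfreq(len(beat), 1 / fs)[np.argmax(spec)]
print(f"beat frequency {f_beat:.0f} Hz  ->  range {f_beat * c / (2 * slope):.2f} m")
```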

    Aeronautical engineering: A continuing bibliography with indexes (supplement 295)

    Get PDF
    This bibliography lists 581 reports, articles, and other documents introduced into the NASA Scientific and Technical Information System in Sep. 1993. Subject coverage includes: design, construction and testing of aircraft and aircraft engines; aircraft components, equipment, and systems; ground support systems; and theoretical and applied aspects of aerodynamics and general fluid dynamics

    Real-time Ultrasound Signals Processing: Denoising and Super-resolution

    Get PDF
    Ultrasound acquisition is widespread in the biomedical field, due to its low cost, portability, and non-invasiveness for the patient. The processing and analysis of US signals, such as images, 2D videos, and volumetric images, allow the physician to monitor the evolution of the patient's disease and support diagnosis and treatment (e.g., surgery). US images are affected by speckle noise, generated by the overlap of US waves. Furthermore, low-resolution images are acquired when a high acquisition frequency is applied to accurately characterise the behaviour of anatomical features that quickly change over time. Denoising and super-resolution of US signals are therefore relevant to improving the visual evaluation of the physician and the performance and accuracy of processing methods, such as segmentation and classification. The main requirements for the processing and analysis of US signals are real-time execution, preservation of anatomical features, and reduction of artefacts. In this context, we present a novel framework for the real-time denoising of US 2D images based on deep learning and high-performance computing, which reduces noise while preserving anatomical features in real-time execution. We extend our framework to the denoising of arbitrary US signals, such as 2D videos and 3D images, and we incorporate denoising algorithms that account for spatio-temporal signal properties into an image-to-image deep learning model. As a building block of this framework, we propose a novel denoising method belonging to the class of low-rank approximations, which learns and predicts the optimal thresholds of the Singular Value Decomposition. While previous denoising work compromises between computational cost and effectiveness, the proposed framework achieves the results of the best denoising algorithms in terms of noise removal, anatomical feature preservation, and conservation of geometric and texture properties, in a real-time execution that respects industrial constraints. The framework reduces artefacts (e.g., blurring) and preserves the spatio-temporal consistency among frames/slices; it is also general with respect to the denoising algorithm, anatomical district, and noise intensity. We then introduce a novel framework for the real-time reconstruction of non-acquired scan lines through an interpolating method; a deep learning model improves the results of the interpolation to match the target image (i.e., the high-resolution image). We improve the accuracy of the prediction of the reconstructed lines through the design of the network architecture and the loss function. In the context of signal approximation, we introduce our kernel-based sampling method for the reconstruction of 2D and 3D signals defined on regular and irregular grids, with an application to US 2D and 3D images. Our method improves on previous work in terms of sampling quality, approximation accuracy, and geometry reconstruction, with a slightly higher computational cost. For both denoising and super-resolution, we evaluate compliance with the real-time requirement of US applications in the medical domain and provide a quantitative evaluation of denoising and super-resolution methods on US and synthetic images. Finally, we discuss the role of denoising and super-resolution as pre-processing steps for segmentation and predictive analysis of breast pathologies.
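
    A minimal sketch of SVD-threshold (low-rank) denoising, the family the learned-threshold method above belongs to, is shown below. Here the threshold is a fixed number chosen by hand; in the framework described in the abstract it would instead be predicted by the trained model. The toy image and noise model are assumptions.

```python
import numpy as np

def svd_denoise(img, threshold):
    """Low-rank denoising: keep only singular values at or above a threshold."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    s = np.where(s >= threshold, s, 0.0)      # hard-threshold the singular spectrum
    return (U * s) @ Vt

# Toy stand-in for a US image: a smooth low-rank structure corrupted by
# multiplicative, speckle-like noise.
rng = np.random.default_rng(0)
clean = np.outer(np.hanning(128), np.hanning(128))
noisy = clean * (1.0 + 0.3 * rng.normal(size=clean.shape))
denoised = svd_denoise(noisy, threshold=5.0)
print("residual MSE after denoising:", np.mean((denoised - clean) ** 2))
print("residual MSE of the noisy input:", np.mean((noisy - clean) ** 2))
```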