
    Synergy of nanocomposite force myography and optical fiber-based wrist angle sensing for ambiguous sign classification

    This paper aims at understanding the capabilities and limitations of nanocomposite force myography (FMG) sensors and optical fiber sensors in standalone systems, and the influence of their synergy on the classification of ambiguous hand gestures. A set of 10 highly similar hand signs from the fingerspelling alphabet of American Sign Language is adopted in this study. FMG signals are collected from one healthy subject performing the selected set of gestures, with 40 repetitions of each gesture. The K-Tournament Grasshopper Extreme Learner (KTGEL) classifier is implemented to perform automated feature selection and hand sign classification with an efficient network size and high accuracy.
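
    The abstract does not detail the KTGEL classifier, so the following minimal sketch only illustrates the overall pipeline on windowed FMG data, with a scikit-learn random forest standing in for KTGEL; the array shapes, channel count, and time-domain features are illustrative assumptions, not the authors' setup.

        # Minimal sketch of a gesture-classification pipeline on windowed FMG data.
        # A RandomForest stands in for the paper's KTGEL classifier; the shapes and
        # features below are illustrative assumptions.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)

        # Hypothetical recording: 10 signs x 40 repetitions, 8 FMG channels,
        # 200 samples per repetition window.
        n_signs, n_reps, n_channels, n_samples = 10, 40, 8, 200
        X_raw = rng.normal(size=(n_signs * n_reps, n_channels, n_samples))
        y = np.repeat(np.arange(n_signs), n_reps)

        def simple_features(window):
            """Per-channel time-domain features commonly used with FMG signals."""
            return np.concatenate([
                window.mean(axis=1),                           # mean level per channel
                window.std(axis=1),                            # variability per channel
                np.abs(np.diff(window, axis=1)).mean(axis=1),  # mean absolute slope
            ])

        X = np.stack([simple_features(w) for w in X_raw])

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)
        print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))

    Replacing the random arrays with real FMG and wrist-angle recordings, and the random forest with the KTGEL classifier, would correspond to the setup described in the abstract.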

    wKSR-NLM: An Ultrasound Despeckling Filter Based on Patch Ratio and Statistical Similarity

    Ultrasound images are affected by the well-known speckle phenomenon, which degrades their perceived quality. In recent years, several denoising approaches have been proposed. Among them, those belonging to the non-local (NL) family have shown interesting performance. The main difference among the proposed NL filters is the metric adopted for measuring the similarity between patches. Within this manuscript, a statistical metric based on the ratio between two patches is presented. Compared to other statistical measures, the proposed one is able to take into account the texture of the patch, to incorporate a weighting kernel, and to limit the computational burden. A comparative analysis with other despeckling filters is presented. The method provides a good balance between noise reduction and detail preservation on both simulated (by means of the Field II software) and real (breast tumor) datasets.
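
    The exact wKSR statistic is not given in the abstract, so the sketch below uses the weighted variance of the log-ratio between patches as a stand-in ratio-based dissimilarity, combined with the Gaussian weighting kernel the abstract mentions; patch size, search window, and the filtering parameter h are assumptions.

        # Sketch of a non-local means step where patch similarity comes from the
        # ratio between patches.  The exact wKSR statistic is not described in the
        # abstract; the weighted variance of the log-ratio is used here purely as
        # an illustration of a ratio-based, kernel-weighted metric.
        import numpy as np

        def gaussian_kernel(size, sigma):
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
            return k / k.sum()

        def ratio_dissimilarity(p_ref, p_cmp, kernel, eps=1e-6):
            """Weighted variance of the log-ratio between two patches.

            For purely multiplicative speckle, the ratio of two patches with the
            same underlying signal fluctuates around 1, so a low weighted variance
            of its log indicates similar patches."""
            r = np.log((p_ref + eps) / (p_cmp + eps))
            mean = np.sum(kernel * r)
            return np.sum(kernel * (r - mean) ** 2)

        def nlm_ratio(img, patch=7, search=21, h=0.2):
            """Filter a non-negative envelope image pixel by pixel (slow reference code)."""
            half_p, half_s = patch // 2, search // 2
            pad = half_s + half_p
            padded = np.pad(img, pad, mode="reflect")
            kernel = gaussian_kernel(patch, sigma=patch / 4)
            out = np.zeros_like(img, dtype=float)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    ci, cj = i + pad, j + pad
                    p_ref = padded[ci - half_p:ci + half_p + 1, cj - half_p:cj + half_p + 1]
                    weights, values = [], []
                    for di in range(-half_s, half_s + 1):
                        for dj in range(-half_s, half_s + 1):
                            ni, nj = ci + di, cj + dj
                            p_cmp = padded[ni - half_p:ni + half_p + 1, nj - half_p:nj + half_p + 1]
                            d = ratio_dissimilarity(p_ref, p_cmp, kernel)
                            weights.append(np.exp(-d / h**2))
                            values.append(padded[ni, nj])
                    weights = np.asarray(weights)
                    out[i, j] = np.dot(weights, values) / weights.sum()
            return out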

    Fast GPU-Based Enhanced Wiener Filter for Despeckling SAR Data

    Speckle noise is an inherent problem affecting the image processing field, and in particular synthetic aperture radar (SAR) images. In order to mitigate its adverse effects, several approaches have been introduced in the scientific community during the last three decades, including spatial-based and non-local filtering approaches. However, these techniques suffer from some limitations. In fact, it is very difficult to find an approach that, on the one hand, performs well in terms of noise reduction and image detail preservation and, on the other hand, provides a filtering solution without high computational complexity and within a short processing time. In this paper, we evaluate the performance of a newly developed despeckling algorithm, presented as an enhancement of the classical Wiener filter and properly designed to run on a Graphics Processing Unit (GPU). The algorithm is tested on both a simulated framework and real Sentinel-1 SAR data. The results, obtained in comparison with other filters, are interesting and promising. Indeed, the proposed method turns out to be a useful filtering instrument for large images, performing the processing within a limited time and ensuring good speckle noise reduction with considerable image detail preservation.
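
    The enhanced Wiener filter itself is not described in the abstract; the sketch below shows the classical local-statistics (Wiener/Lee-type) despeckling filter that such methods build on, with the window size and equivalent number of looks (ENL) as assumptions. The same NumPy/SciPy calls can run on a GPU by swapping numpy for cupy and scipy.ndimage for cupyx.scipy.ndimage.

        # Minimal sketch of a classical local-statistics (Wiener/Lee-type)
        # despeckling filter on an intensity image.  Window size and ENL are
        # illustrative assumptions, not the paper's settings.
        import numpy as np
        from scipy.ndimage import uniform_filter
        # For GPU execution, CuPy offers the same interfaces:
        # numpy -> cupy, scipy.ndimage -> cupyx.scipy.ndimage.

        def wiener_despeckle(intensity, window=7, enl=4.4):
            """Adaptive MMSE filter for multiplicative speckle."""
            sigma_v2 = 1.0 / enl                      # speckle variance for ENL looks
            local_mean = uniform_filter(intensity, window)
            local_var = uniform_filter(intensity**2, window) - local_mean**2
            # Estimated variance of the underlying (noise-free) scene.
            signal_var = np.maximum(
                (local_var - local_mean**2 * sigma_v2) / (1.0 + sigma_v2), 0.0)
            gain = signal_var / np.maximum(local_var, 1e-12)
            return local_mean + gain * (intensity - local_mean)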

    Fast algorithm for despeckling Sentinel-1 SAR data

    Speckle noise is an inherent problem that affects the image processing field, particularly synthetic aperture radar (SAR) images. In order to mitigate its adverse effects, different approaches have been proposed in the literature over the last twenty years. However, all approaches suffer from some limitations. In particular, it is very difficult to find an approach able, at the same time, to preserve image details and to provide the solution without requiring high computational complexity and long processing time. This paper aims to test a new despeckling algorithm that is able to jointly satisfy these two requirements. The algorithm, based on an evolution of the Wiener filter modified using Markov Random Fields, is tested and compared on real Sentinel-1 data. The results are interesting and promising: the proposed algorithm turns out to be a useful instrument when large images, or stacks of images, need to be filtered within a limited time, while ensuring good detail preservation.
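
    The abstract does not specify how the Markov Random Field enters the filter, so the following sketch only illustrates the general idea of MRF regularization: a few ICM-style iterations that pull a Wiener-type estimate toward local smoothness under a quadratic prior; the weights and number of iterations are assumptions.

        # Illustration of refining a Wiener-type estimate with a Gaussian MRF
        # prior via ICM iterations.  This is an illustrative assumption, not the
        # authors' formulation.
        import numpy as np

        def neighbor_mean(x):
            """Mean of the 4-connected neighbours, with edge replication."""
            p = np.pad(x, 1, mode="edge")
            return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0

        def mrf_refine(wiener_estimate, data_weight=1.0, prior_weight=0.5, n_iter=10):
            """ICM iterations for a Gaussian likelihood plus a quadratic MRF prior.

            Each update is the closed-form minimiser, at a single pixel, of
                data_weight * (x - y)**2 + prior_weight * sum_n (x - x_n)**2
            which keeps the result close to the Wiener estimate while
            encouraging local smoothness."""
            y = wiener_estimate
            x = y.copy()
            for _ in range(n_iter):
                x = (data_weight * y + prior_weight * 4.0 * neighbor_mean(x)) / (
                    data_weight + prior_weight * 4.0)
            return x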

    SAR image restoration via a NL approach based on the KS test

    Synthetic Aperture Radar (SAR) image despeckling is still an open issue. Several approaches have been proposed in the last decades. The recently proposed non-local (NL) approaches are often considered the state of the art in SAR despeckling. The difference between the NL algorithms presented in the literature is mainly related to the adopted distance metric between patches and to the rule used for averaging the selected pixels. Within this manuscript, a new metric for selecting similar patches is presented. The metric is based on the statistical distribution of the complex noisy image. The Kolmogorov-Smirnov (KS) test is adopted to compare the statistical distributions and to select similar patches. The approach has been tested and validated on real data, showing interesting performance.
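
    As a rough illustration of the abstract's idea, the sketch below selects patches whose samples pass a two-sample Kolmogorov-Smirnov test against the reference patch and averages their central pixels; the patch size, search window, and significance threshold are assumptions, and intensity samples are used in place of the complex data.

        # Sketch of KS-test-based patch selection for a non-local average.
        # Sizes and the p-value threshold are illustrative assumptions.
        import numpy as np
        from scipy.stats import ks_2samp

        def ks_similar(patch_ref, patch_cmp, alpha=0.05):
            """Accept a candidate patch when the KS test cannot reject that the two
            patches come from the same distribution (p-value above alpha)."""
            _, p_value = ks_2samp(patch_ref.ravel(), patch_cmp.ravel())
            return p_value > alpha

        def nl_mean_ks(img, i, j, patch=7, search=21, alpha=0.05):
            """Despeckled value at pixel (i, j): plain average of the centres of
            all patches in the search window that pass the KS similarity test."""
            half_p, half_s = patch // 2, search // 2
            pad = half_p + half_s
            padded = np.pad(img, pad, mode="reflect")
            ci, cj = i + pad, j + pad
            p_ref = padded[ci - half_p:ci + half_p + 1, cj - half_p:cj + half_p + 1]
            selected = []
            for di in range(-half_s, half_s + 1):
                for dj in range(-half_s, half_s + 1):
                    ni, nj = ci + di, cj + dj
                    p_cmp = padded[ni - half_p:ni + half_p + 1, nj - half_p:nj + half_p + 1]
                    if ks_similar(p_ref, p_cmp, alpha):
                        selected.append(padded[ni, nj])
            # The reference patch always passes the test, so the list is never empty.
            return float(np.mean(selected))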

    An enhanced deep learning approach for tectonic fault and fracture extraction in very high resolution optical images

    Identifying and mapping fractures and faults are important tasks in the geosciences, especially in earthquake hazard and geological reservoir studies. This mapping can be done manually in optical images of the Earth's surface, yet it is time-consuming and requires expertise that may not be available. Building upon a recent prior study, we develop a deep learning approach, based on a variant of the U-Net neural network, and apply it to automate fracture and fault mapping in optical images and topographic data. We show that training the model with realistic knowledge of the uneven distributions and trends of fractures and faults, and using a loss function that operates at both the pixel scale and larger scales through the combined use of weighted Binary Cross Entropy and Intersection over Union, greatly improves the predictions, both qualitatively and quantitatively. As we apply the model to a site differing from those used for training, we demonstrate its enhanced generalization capacity.
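
    A common way to combine the two terms the abstract names is shown below in PyTorch: weighted binary cross entropy acting at the pixel scale plus a soft Intersection-over-Union term acting at a larger scale; the class weight and mixing coefficient are assumptions, not the paper's values.

        # Sketch of a combined weighted-BCE + soft-IoU loss for binary
        # fracture/fault segmentation.  pos_weight and iou_weight are assumptions.
        import torch
        import torch.nn.functional as F

        def bce_iou_loss(logits, target, pos_weight=10.0, iou_weight=1.0, eps=1e-6):
            """logits, target: float tensors of shape (batch, 1, H, W), target in {0, 1}."""
            # Pixel-scale term: BCE with extra weight on the (rare) fracture pixels.
            bce = F.binary_cross_entropy_with_logits(
                logits, target,
                pos_weight=torch.tensor(pos_weight, device=logits.device))
            # Larger-scale term: 1 - soft IoU computed from predicted probabilities.
            probs = torch.sigmoid(logits)
            intersection = (probs * target).sum(dim=(1, 2, 3))
            union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3)) - intersection
            soft_iou = (intersection + eps) / (union + eps)
            return bce + iou_weight * (1.0 - soft_iou).mean()

        # Typical use inside a training step (unet, images, masks are hypothetical):
        # loss = bce_iou_loss(unet(images), masks)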

    Comparative Study of Measurement Methods for Embedded Bioimpedance Spectroscopy Systems

    Bioimpedance spectroscopy (BIS) is an advanced measurement method that provides information on impedance changes at several frequencies by injecting a low current into a device under test (DUT) and analyzing the response voltage. Several methods have been developed for BIS measurement, calculating impedance with a gain phase detector (GPD), IQ demodulation, or the fast Fourier transform (FFT). Although the measurement method has a strong influence on the performance of the measurement system, a systematic comparative study has not been performed yet. In this paper, we compare these methods based on simulations and experimental studies. To maintain similar conditions in the implementation of all methods, we use the same signal generator followed by a voltage-controlled current source (VCCS) as the excitation source. For performance analysis, three DUTs have been designed to imitate the typical behavior of biological tissues. A laboratory impedance analyzer is used as a reference. The comparison addresses magnitude measurement accuracy, phase measurement accuracy, signal processing, hardware complexity, and power consumption. The results show that the FFT-based system achieves high accuracy for magnitude and phase measurement while having the lowest hardware complexity and power consumption, but it requires much higher software complexity.
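
    A minimal sketch of the FFT-based variant that the comparison favors is given below: sample the injected current and the response voltage, take the spectrum of each, and read off magnitude and phase of Z = V/I at the excitation bin; the sampling rate, excitation frequency, and synthetic DUT are assumptions.

        # Minimal sketch of FFT-based impedance measurement.  Sampling rate,
        # excitation frequency, and the purely resistive test signals are
        # illustrative assumptions.
        import numpy as np

        def impedance_fft(voltage, current, fs, f_excitation):
            """Return (|Z|, phase in degrees) at the excitation frequency."""
            n = len(voltage)
            spectrum_v = np.fft.rfft(voltage * np.hanning(n))
            spectrum_i = np.fft.rfft(current * np.hanning(n))
            k = int(round(f_excitation * n / fs))     # FFT bin of the excitation tone
            z = spectrum_v[k] / spectrum_i[k]
            return np.abs(z), np.degrees(np.angle(z))

        # Example with a synthetic 10 kHz excitation and a purely resistive 1 kOhm DUT.
        fs, f0, n = 1_000_000, 10_000, 4096
        t = np.arange(n) / fs
        i_sig = 1e-3 * np.sin(2 * np.pi * f0 * t)     # 1 mA excitation current
        v_sig = 1.0 * np.sin(2 * np.pi * f0 * t)      # 1 V response -> 1 kOhm
        print(impedance_fft(v_sig, i_sig, fs, f0))    # ~ (1000.0, 0.0)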