
    Airborne LiDAR for DEM generation: some critical issues

    Airborne LiDAR is one of the most effective and reliable means of terrain data collection. Using LiDAR data for DEM generation is becoming standard practice in spatially related fields. However, the effective processing of the raw LiDAR data and the generation of an efficient, high-quality DEM remain significant challenges. This paper reviews recent advances in airborne LiDAR systems and the use of LiDAR data for DEM generation, with special focus on LiDAR data filters, interpolation methods, DEM resolution, and LiDAR data reduction. Separating LiDAR points into ground and non-ground points is the most critical and difficult step in DEM generation from LiDAR data. Commonly used and recently developed LiDAR filtering methods are presented. Interpolation methods and the choice of a suitable interpolator and DEM resolution for LiDAR DEM generation are discussed in detail. To reduce data redundancy and increase efficiency in terms of storage and manipulation, LiDAR data reduction is required in the process of DEM generation. Feature-specific elements such as breaklines contribute significantly to DEM quality. Therefore, data reduction should be conducted in such a way that critical elements are kept while less important elements are removed. Given the high-density characteristic of LiDAR data, breaklines can be extracted directly from LiDAR data. Extraction of breaklines and their integration into DEM generation are presented.
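
    As a minimal illustration of the filtering and interpolation steps discussed above (not the specific algorithms surveyed in the paper), the sketch below labels ground points with a crude local-minimum filter and then interpolates a DEM elevation by inverse-distance weighting; the point cloud, cell size, and tolerance are all invented for the example.

```python
# Hedged sketch: grid-based ground filtering + IDW interpolation for a DEM.
# The cell size, tolerance, and toy point cloud below are illustrative only.
import numpy as np

def filter_ground(points, cell=5.0, tol=0.5):
    """Label a point as ground if it lies within `tol` of the lowest
    return in its grid cell (a crude local-minimum filter)."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    ground = np.zeros(len(points), dtype=bool)
    cells = {}
    for i, k in enumerate(map(tuple, keys)):
        cells.setdefault(k, []).append(i)
    for idx in cells.values():
        z = points[idx, 2]
        ground[np.array(idx)[z <= z.min() + tol]] = True
    return ground

def idw(points, x, y, power=2.0):
    """Interpolate elevation at (x, y) from ground points by
    inverse-distance weighting."""
    d = np.hypot(points[:, 0] - x, points[:, 1] - y)
    d = np.maximum(d, 1e-9)           # avoid division by zero at a point
    w = 1.0 / d**power
    return float(np.sum(w * points[:, 2]) / np.sum(w))

# Toy cloud: flat terrain near z = 10 with one vegetation return at z = 18
pts = np.array([[0, 0, 10.0], [1, 1, 10.2], [2, 2, 18.0], [3, 3, 10.1]])
mask = filter_ground(pts)             # the vegetation point is rejected
dem_z = idw(pts[mask], 1.5, 1.5)      # interpolated elevation near 10
```

    Real filters are considerably more elaborate (slope-based, TIN densification, morphological), but they share this structure: classify points, then interpolate only the ground class.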

    CardioCam: Leveraging Camera on Mobile Devices to Verify Users While Their Heart is Pumping

    With the increasing prevalence of mobile and IoT devices (e.g., smartphones, tablets, smart-home appliances), massive amounts of private and sensitive information are stored on these devices. To prevent unauthorized access to these devices, existing user verification solutions either rely on the complexity of user-defined secrets (e.g., passwords) or resort to specialized biometric sensors (e.g., fingerprint readers), but users may still suffer from various attacks, such as password theft, shoulder surfing, smudge attacks, and forged biometrics. In this paper, we propose CardioCam, a low-cost, general, hard-to-forge user verification system leveraging unique cardiac biometrics extracted from the readily available built-in cameras in mobile and IoT devices. We demonstrate that unique cardiac features can be extracted from the cardiac motion patterns in fingertips pressed on the built-in camera. To mitigate the impacts of varying ambient lighting conditions and human movements in practical scenarios, CardioCam develops a gradient-based technique to optimize the camera configuration and dynamically selects the most sensitive pixels in a camera frame to extract reliable cardiac motion patterns. Furthermore, morphological characteristic analysis is deployed to derive user-specific cardiac features, and a feature transformation scheme grounded in Principal Component Analysis (PCA) is developed to enhance the robustness of the cardiac biometrics for effective user verification. With the prototyped system, extensive experiments involving 25 subjects demonstrate that CardioCam can achieve effective and reliable user verification with an average true positive rate (TPR) of over 99% while maintaining a false positive rate (FPR) as low as 4%.
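
    The PCA-based feature transformation step can be sketched as follows. This is generic PCA via eigendecomposition of the covariance matrix applied to a synthetic per-beat feature matrix; it is not CardioCam's actual pipeline or data.

```python
# Hedged sketch of a PCA feature transformation: project per-beat cardiac
# feature vectors onto their leading principal components. The synthetic
# feature matrix below stands in for real morphological features.
import numpy as np

rng = np.random.default_rng(0)
beats = rng.normal(size=(50, 6))       # 50 beats x 6 morphological features
beats[:, 1] = 2.0 * beats[:, 0]        # correlated features compress well

centered = beats - beats.mean(axis=0)  # PCA assumes zero-mean features
cov = centered.T @ centered / (len(beats) - 1)
eigvals, eigvecs = np.linalg.eigh(cov) # eigenpairs in ascending order
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order[:3]]     # keep the top 3 components

projected = centered @ components      # compact, more robust features
```

    Projecting onto a few high-variance components discards per-measurement noise directions, which is the robustness argument the abstract makes for its verification features.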

    Dublin City University video track experiments for TREC 2002

    Dublin City University participated in the Feature Extraction task and the Search task of the TREC-2002 Video Track. In the Feature Extraction task, we submitted three features: Face, Speech, and Music. In the Search task, we developed an interactive video retrieval system, which incorporated the 40 hours of the video search test collection and supported user searching using our own feature extraction data along with the donated feature data and ASR transcripts from other Video Track groups. This video retrieval system allows a user to specify a query based on the 10 features and the ASR transcript, and the query result is a ranked list of videos that can be further browsed at the shot level. To evaluate the usefulness of feature-based querying, we developed a second system interface that provides only ASR transcript-based querying, and we conducted an experiment with 12 test users to compare the two systems. Results were submitted to NIST, and we are currently conducting further analysis of user performance with the two systems.
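
    The core retrieval idea, blending per-shot feature confidences with ASR transcript matching into one ranked shot list, can be sketched as below. The shot records, feature names, and the blending weight are invented for illustration and do not reflect DCU's actual scoring.

```python
# Hedged sketch: rank video shots by a weighted blend of feature-detector
# confidence and ASR transcript term overlap. All data here is toy data.
from collections import Counter

shots = {
    "shot1": {"features": {"face": 0.9, "speech": 0.2},
              "asr": "president speaks to press"},
    "shot2": {"features": {"face": 0.1, "speech": 0.8},
              "asr": "music concert crowd"},
}

def score(shot, wanted_features, query_terms, alpha=0.5):
    """Blend mean feature confidence with ASR term overlap."""
    f = sum(shot["features"].get(k, 0.0) for k in wanted_features)
    f /= max(len(wanted_features), 1)
    words = Counter(shot["asr"].split())
    t = sum(words[w] for w in query_terms) / max(len(query_terms), 1)
    return alpha * f + (1 - alpha) * t

ranked = sorted(shots,
                key=lambda s: score(shots[s], ["face"], ["president"]),
                reverse=True)
```

    Setting `alpha` to 0 degenerates to the ASR-only interface used as the experimental baseline; the user study compares exactly these two conditions.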

    Estimating Blood Pressure from Photoplethysmogram Signal and Demographic Features using Machine Learning Techniques

    Hypertension is a potentially dangerous health condition that can be indicated directly by blood pressure (BP), and it often leads to other health complications. Continuous monitoring of BP is very important; however, cuff-based BP measurements are discrete and uncomfortable for the user. To address this need, a cuff-less, continuous, and non-invasive BP measurement system is proposed using the photoplethysmogram (PPG) signal and demographic features with machine learning (ML) algorithms. PPG signals were acquired from 219 subjects and underwent pre-processing and feature extraction steps. Time-, frequency-, and time-frequency-domain features were extracted from the PPG signal and its derivatives. Feature selection techniques were used to reduce computational complexity and to decrease the chance of over-fitting the ML algorithms. The features were then used to train and evaluate the ML algorithms, and the best regression models were selected for systolic BP (SBP) and diastolic BP (DBP) estimation individually. Gaussian Process Regression (GPR) with the ReliefF feature selection algorithm outperforms the other algorithms, estimating SBP and DBP with root-mean-square errors (RMSE) of 6.74 and 3.59, respectively. This ML model can be implemented in hardware systems to continuously monitor BP and avoid critical health conditions due to sudden changes. Comment: Accepted for publication in Sensors; 14 figures, 14 tables.
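
    The regression stage can be sketched with a minimal numpy Gaussian Process Regression using an RBF kernel. The two "selected PPG features", the synthetic SBP target, and the kernel hyperparameters are all invented; the paper's actual feature set, ReliefF selection, and tuning are far richer.

```python
# Hedged sketch of GPR with an RBF kernel mapping two (synthetic) selected
# PPG features to SBP. Training data and hyperparameters are illustrative.
import numpy as np

def rbf(A, B, length=1.0):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length**2))

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(40, 2))   # two selected PPG features
y = 120 + 5 * X[:, 0] - 3 * X[:, 1]    # synthetic SBP target (mmHg)

y_mean = y.mean()                      # center the target for a zero-mean GP
K = rbf(X, X) + 1e-6 * np.eye(len(X))  # small jitter for numerical stability
alpha = np.linalg.solve(K, y - y_mean)

def predict(x_new):
    """Posterior mean of the GP at new feature vectors."""
    return y_mean + rbf(np.atleast_2d(x_new), X) @ alpha

pred = float(predict([0.5, -0.5])[0])  # true synthetic value is 124 mmHg
```

    In practice a library implementation (e.g. scikit-learn's `GaussianProcessRegressor`) with tuned kernel hyperparameters would replace this hand-rolled posterior mean.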

    Delineation of ECG Feature Extraction Using Multiresolution Analysis Framework

    ECG signals have characteristic time-varying morphology, distinguished as the P wave, QRS complex, and T wave. Delineation is an important step in ECG signal processing, used to identify the critical points that mark the interval and amplitude locations of each wave's morphological features. The results of ECG signal delineation can be used by clinicians to associate the delineation points with morphological classes; delineation also produces the temporal parameter values of ECG signals. The delineation process includes detecting the onset and offset of the QRS complex and of the P and T waves, represented as pulse widths, as well as detecting the peak of each wave feature. Previous work applied bandpass filters to reduce the amplitude of the P and T waves and then passed the signal through non-linear transformations such as derivatives or squaring to enhance the QRS complex. However, the spectral bandwidth of the QRS complex may differ between patients, or even within the same patient, so this method was less effective against the morphological variations in ECG signals. This study developed delineation of ECG features based on multiresolution analysis with the discrete wavelet transform. The mother wavelet used was a quadratic spline function with compact support. The R, T, and P wave peaks were indicated by zero crossings of the wavelet-transformed signals, while the onsets and offsets were derived from modulus maxima and modulus minima. Results show the proposed method was able to detect the QRS complex with a sensitivity of 97.05% and a precision of 95.92%, the T wave with a sensitivity of 99.79% and a precision of 96.46%, and the P wave with a sensitivity of 56.69% and a precision of 57.78%. Real-time analysis of time-varying ECG morphology will be addressed in future research.
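
    The zero-crossing idea can be illustrated with a toy signal: convolving a peak with an antisymmetric (derivative-like) filter turns the peak into a positive-to-negative zero crossing flanked by a modulus maximum and minimum. The filter and signal below are illustrative stand-ins, not the paper's quadratic spline wavelet or real ECG data.

```python
# Hedged sketch: locate a toy R peak as the zero crossing of an
# antisymmetric wavelet-like filter output. Signal and filter are toy data.
import numpy as np

fs = 250                                        # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
ecg = np.exp(-((t - 0.5) ** 2) / (2 * 0.01**2)) # lone R wave at t = 0.5 s

h = np.array([1.0, 2.0, 0.0, -2.0, -1.0])       # antisymmetric derivative filter
w = np.convolve(ecg, h, mode="same")            # acts like a smoothed derivative

# the R peak appears as a positive-to-negative zero crossing of w,
# sitting between a modulus maximum and a modulus minimum
signs = np.sign(w)
crossings = np.where((signs[:-1] > 0) & (signs[1:] <= 0))[0]
r_index = crossings[np.argmax(ecg[crossings])]  # keep the crossing on the peak
r_time = t[r_index]                             # close to 0.5 s
```

    In the multiresolution setting the same search is run per scale, and corroborating crossings across scales reject noise that a single-scale derivative would pass.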

    The acousto-ultrasonic approach

    The nature and underlying rationale of the acousto-ultrasonic approach are reviewed, needed advances in signal analysis and evaluation methods are suggested, and application potentials are discussed. Acousto-ultrasonics is an NDE technique combining aspects of acoustic emission methodology with ultrasonic simulation of stress waves. This approach uses the analysis of simulated stress waves to detect and map variations in mechanical properties. Unlike most NDE, acousto-ultrasonics is less concerned with flaw detection than with assessing the collective effects of various flaws and material anomalies. Acousto-ultrasonics has been applied chiefly to laminated and filament-wound fiber-reinforced composites. It has been used to assess the significant strength- and toughness-reducing effects that can be wrought by combinations of essentially minor flaws and diffuse flaw populations. Acousto-ultrasonics assesses integrated defect states and the resultant variations in properties such as tensile, shear, and flexural strengths and fracture resistance. Matrix cure state, porosity, fiber orientation, fiber volume fraction, fiber-matrix bonding, and interlaminar bond quality are underlying factors.
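
    One metric commonly used in acousto-ultrasonics (though not named in this abstract) is the stress wave factor, often computed as a ringdown count: the number of times the received stress wave crosses a fixed threshold. The sketch below computes it on a synthetic decaying waveform; the signal parameters and the attenuation factor for the "flawed" case are invented.

```python
# Hedged sketch: a ringdown-count stress wave factor (SWF) on a synthetic
# received stress wave. A more attenuative (flawed) path yields a lower SWF.
import numpy as np

fs = 1_000_000                        # 1 MHz sampling
t = np.arange(0, 0.001, 1 / fs)       # 1 ms record
wave = np.exp(-t / 0.0002) * np.sin(2 * np.pi * 100_000 * t)  # ringdown burst

def stress_wave_factor(signal, threshold=0.1):
    """Count positive-going crossings of the threshold."""
    above = signal > threshold
    return int(np.sum(~above[:-1] & above[1:]))

swf_good = stress_wave_factor(wave)        # toy well-bonded region
swf_poor = stress_wave_factor(0.3 * wave)  # toy attenuated, flawed region
```

    The comparison, not the absolute count, carries the information: regions with degraded stress-wave transmission (poor bonding, porosity, diffuse damage) ring down faster and score lower.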