    Radar and RGB-depth sensors for fall detection: a review

    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends in this field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of falls and their consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users’ acceptance and compliance compared with other sensor technologies, such as video-cameras or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from the multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper discusses.

    Array signal processing for maximum likelihood direction-of-arrival estimation

    Emitter Direction-of-Arrival (DOA) estimation is a fundamental problem in a variety of applications including radar, sonar, and wireless communications. The problem has received considerable attention in the literature and numerous methods have been proposed. Maximum Likelihood (ML) is a nearly optimal technique producing superior estimates compared to other methods, especially in unfavourable conditions, and is thus of significant practical interest. This paper discusses in detail the techniques for ML DOA estimation in either white Gaussian noise or unknown noise environments. Their performances are analysed, compared, and evaluated against the theoretical lower bounds.
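
    For context, the deterministic (conditional) ML criterion that underlies much of this line of work can be stated compactly; this is the standard textbook form, not necessarily the exact formulation compared in the paper:

```latex
% Deterministic ML DOA criterion for N array snapshots x(t), steering
% matrix A(\theta), and sample covariance \hat{R}:
\hat{R} = \frac{1}{N} \sum_{t=1}^{N} x(t)\, x^{H}(t), \qquad
P_{A}(\theta) = A(\theta) \bigl( A^{H}(\theta) A(\theta) \bigr)^{-1} A^{H}(\theta),
\qquad
\hat{\theta}_{\mathrm{ML}} = \arg\min_{\theta} \operatorname{tr}\!
  \bigl[ \bigl( I - P_{A}(\theta) \bigr) \hat{R} \bigr].
```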

    Micro-Doppler-based in-home aided and unaided walking recognition with multiple radar and sonar systems

    Published in IET Radar, Sonar and Navigation, online first 21/06/2016. The potential for using micro-Doppler signatures as a basis for distinguishing between aided and unaided gaits is considered in this study for the purpose of characterising normal elderly gait and assessing patient recovery. In particular, five different classes of mobility are considered: normal unaided walking, walking with a limp, walking using a cane or tripod, walking with a walker, and using a wheelchair. This presents a challenging classification problem as the differences in micro-Doppler for these activities can be quite slight. Within this context, the performance of four different radar and sonar systems – a 40 kHz sonar, a 5.8 GHz wireless pulsed Doppler radar mote, a 10 GHz X-band continuous wave (CW) radar, and a 24 GHz CW radar – is evaluated using a broad range of features. Performance improvements using feature selection are addressed, as well as the impact on performance of sensor placement and potential occlusion by household objects. Results show that nearly 80% correct classification can be achieved with 10 s observations from the 24 GHz CW radar, whereas 86% can be achieved with 5 s observations from the sonar.
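
    As a rough illustration of the processing behind such micro-Doppler features, the sketch below computes a two-sided spectrogram of a complex CW radar return and derives simple Doppler centroid and bandwidth statistics; the window length and feature choices are assumptions, not the feature set evaluated in the paper.

```python
import numpy as np
from scipy.signal import spectrogram

def microdoppler_features(x, fs):
    """Spectrogram-based gait features from a complex CW radar return.

    Illustrative only: window length and the centroid/bandwidth statistics
    are assumed choices, not the paper's evaluated feature set.
    """
    # Two-sided spectrogram so positive and negative Doppler are kept.
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192,
                            return_onesided=False)
    f = np.fft.fftshift(f)                    # reorder to -fs/2 .. fs/2
    P = np.fft.fftshift(Sxx, axes=0)          # power, matching order
    P = P / P.sum(axis=0, keepdims=True)      # normalise each time slice
    centroid = (f[:, None] * P).sum(axis=0)   # Doppler centroid vs time
    spread = np.sqrt(((f[:, None] - centroid) ** 2 * P).sum(axis=0))
    return np.array([centroid.mean(), centroid.std(),
                     spread.mean(), spread.std()])
```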

    Distributed data fusion algorithms for inertial network systems

    New approaches to the development of data fusion algorithms for inertial network systems are described. The aim of this development is to increase the accuracy of the inertial state vector estimates in all the network nodes, including the navigation states, and also to improve the fault tolerance of inertial network systems. An analysis of distributed inertial sensing models is presented and new distributed data fusion algorithms are developed for inertial network systems. The distributed data fusion algorithm comprises two steps: inertial measurement fusion and state fusion. The inertial measurement fusion allows each node to assimilate all the inertial measurements from the network, which improves the performance of inertial sensor failure detection and isolation algorithms by providing more information. The state fusion further increases the accuracy and enhances the integrity of the local inertial state and navigation state estimates. The simulation results show that the two-step fusion procedure overcomes the disadvantages of traditional inertial sensor alignment procedures, and that the slave inertial nodes can be accurately aligned to the master node.
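
    A minimal sketch of the two-step pattern described above (measurement fusion, then state fusion) is inverse-covariance fusion applied at both levels; this is a toy illustration under an independence assumption, not the paper's algorithm.

```python
import numpy as np

def information_fusion(estimates, covariances):
    """Inverse-covariance (information-form) fusion of several estimates of
    the same quantity; a toy stand-in for both steps of the scheme."""
    info = sum(np.linalg.inv(P) for P in covariances)
    fused_P = np.linalg.inv(info)
    fused_x = fused_P @ sum(np.linalg.inv(P) @ x
                            for x, P in zip(estimates, covariances))
    return fused_x, fused_P

# Step 1 (measurement fusion): a node combines redundant inertial
# measurements shared across the network:
#   z_fused, R_fused = information_fusion([z1, z2], [R1, R2])
# Step 2 (state fusion): local navigation-state estimates are combined the
# same way (ignoring cross-correlations, which a real system must handle):
#   x_fused, P_fused = information_fusion([x1, x2], [P1, P2])
```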

    Magnetic and radar sensing for multimodal remote health monitoring

    With increased life expectancy and the rise in health conditions related to aging, there is a need for new technologies that can routinely monitor vulnerable people, identify their daily pattern of activities, and detect any anomaly or critical event such as a fall. This paper aims to evaluate magnetic and radar sensors as suitable technologies for remote health monitoring, both individually and by fusing their information. After experiments collecting data from 20 volunteers, numerical features have been extracted in both the time and frequency domains. To analyse and validate the fusion method across different classifiers, a Support Vector Machine with a quadratic kernel and an Artificial Neural Network with one and multiple hidden layers have been implemented. Furthermore, for both classifiers, feature selection has been performed to obtain salient features. Using this technique along with fusion, both classifiers can detect 10 different activities with an accuracy of approximately 96%. In cases where the user is unknown to the classifier, an accuracy of approximately 92% is maintained.
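
    A minimal sketch of the feature-level fusion and quadratic-kernel SVM described above might look as follows, assuming per-sensor feature matrices X_magnetic and X_radar (hypothetical names) and approximating the quadratic kernel with scikit-learn's degree-2 polynomial kernel.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_fused_svm(X_magnetic, X_radar, y):
    """Feature-level fusion by concatenation, then an SVM whose quadratic
    kernel is approximated here by a degree-2 polynomial kernel.

    X_magnetic, X_radar: (n_samples, n_features) arrays of time- and
    frequency-domain features per sensor (hypothetical names).
    """
    X = np.hstack([X_magnetic, X_radar])   # fuse the two feature sets
    clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2))
    return clf.fit(X, y)
```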

    ARTMAP-FTR: A Neural Network For Fusion Target Recognition, With Application To Sonar Classification

    ART (Adaptive Resonance Theory) neural networks for fast, stable learning and prediction have been applied in a variety of areas. Applications include automatic mapping from satellite remote sensing data, machine tool monitoring, medical prediction, digital circuit design, chemical analysis, and robot vision. Supervised ART architectures, called ARTMAP systems, feature internal control mechanisms that create stable recognition categories of optimal size by maximizing code compression while minimizing predictive error in an on-line setting. Special-purpose requirements of various application domains have led to a number of ARTMAP variants, including fuzzy ARTMAP, ART-EMAP, ARTMAP-IC, Gaussian ARTMAP, and distributed ARTMAP. A new ARTMAP variant, called ARTMAP-FTR (fusion target recognition), has been developed for the problem of multi-ping sonar target classification. The development data set, which lists sonar returns from underwater objects, was provided by the Naval Surface Warfare Center (NSWC) Coastal Systems Station (CSS), Dahlgren Division. The ARTMAP-FTR network has proven to be an effective tool for classifying objects from sonar returns. The system also provides a procedure for solving more general sensor fusion problems. Office of Naval Research (N00014-95-I-0409, N00014-95-I-0657).
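
    For readers new to the ART family, the sketch below implements one choice/match/learn cycle of fuzzy ART, the unsupervised building block shared by the ARTMAP variants listed above; ARTMAP-FTR's fusion-specific mechanisms are not reproduced here.

```python
import numpy as np

def fuzzy_art_step(I, W, rho=0.75, alpha=0.001, beta=1.0):
    """One fuzzy ART choice/match/learn cycle; a building block of the
    ARTMAP family, not ARTMAP-FTR itself.

    I: complement-coded input in [0, 1]; W: mutable list of category
    weight vectors. Returns the index of the chosen (or new) category.
    """
    T = [np.minimum(I, w).sum() / (alpha + w.sum()) for w in W]  # choice
    for j in np.argsort(T)[::-1]:             # search by choice value
        if np.minimum(I, W[j]).sum() / I.sum() >= rho:           # vigilance
            W[j] = beta * np.minimum(I, W[j]) + (1 - beta) * W[j]  # learn
            return j
    W.append(I.copy())                        # no resonance: new category
    return len(W) - 1
```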

    ARTMAP-FTR: A Neural Network for Object Recognition Through Sonar on a Mobile Robot

    ART (Adaptive Resonance Theory) neural networks for fast, stable learning and prediction have been applied in a variety of areas. Applications include automatic mapping from satellite remote sensing data, machine tool monitoring, medical prediction, digital circuit design, chemical analysis, and robot vision. Supervised ART architectures, called ARTMAP systems, feature internal control mechanisms that create stable recognition categories of optimal size by maximizing code compression while minimizing predictive error in an on-line setting. Special-purpose requirements of various application domains have led to a number of ARTMAP variants, including fuzzy ARTMAP, ART-EMAP, ARTMAP-IC, Gaussian ARTMAP, and distributed ARTMAP. A new ARTMAP variant, called ARTMAP-FTR (fusion target recognition), has been developed for the problem of multi-ping sonar target classification. The development data set, which lists sonar returns from underwater objects, was provided by the Naval Surface Warfare Center (NSWC) Coastal Systems Station (CSS), Dahlgren Division. The ARTMAP-FTR network has proven to be an effective tool for classifying objects from sonar returns. The system also provides a procedure for solving more general sensor fusion problems. Office of Naval Research (N00014-95-I-0409, N00014-95-I-0657).

    An efficient iris image thresholding based on binarization threshold in black hole search method

    In an iris recognition system, the segmentation stage is one of the most important stages: the iris is located and then further segmented into the outer and lower boundaries of the iris region. Several algorithms have been proposed to segment these boundaries. The aim of this research is to identify suitable threshold values for locating the outer and lower boundaries using the Black Hole Search Method, chosen because other methods are inefficient for image identification and verification. The experiments were conducted using three data sets, UBIRIS, CASIA and MMU, because of their superiority over others. Given that different iris databases have different file formats and quality, the images used for this work are in JPEG and BMP formats. Based on the experiments, the most suitable threshold values for identifying iris boundaries in the different iris databases have been determined. Compared with the methods used by other researchers, threshold values of 0.3, 0.4 and 0.1 for the UBIRIS, CASIA and MMU databases, respectively, are found to be more accurate. The study concludes that threshold values vary depending on the database.
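
    Applying the reported thresholds is straightforward; a minimal sketch, assuming grayscale intensities scaled to [0, 1] and a hypothetical binarize_iris helper:

```python
import numpy as np

# Threshold values reported above, with pixel intensities scaled to [0, 1].
THRESHOLDS = {"UBIRIS": 0.3, "CASIA": 0.4, "MMU": 0.1}

def binarize_iris(gray, database):
    """Binarize a grayscale iris image so that dark pupil/iris pixels become
    foreground, as a black-hole-style search assumes; the rest of the
    segmentation pipeline (boundary fitting, etc.) is omitted."""
    mask = gray <= THRESHOLDS[database]
    return mask.astype(np.uint8)   # 1 = candidate pupil/iris pixels
```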

    An Empirical Evaluation of Deep Learning on Highway Driving

    Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we present a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at the frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving. Comment: Added a video for lane detection.
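
    As a rough sketch of the fully-convolutional detection idea (each output cell scoring the presence of a car or lane marking), a toy PyTorch model might look as follows; the layer sizes are illustrative and not the architecture the paper evaluates.

```python
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Fully-convolutional sketch in the spirit of real-time CNN detectors:
    each output cell predicts an objectness score for cars/lane markings.
    Layer sizes are illustrative, not the evaluated architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, 1, 1)       # per-cell objectness logit

    def forward(self, x):                     # x: (N, 3, H, W) image batch
        return self.head(self.features(x))    # (N, 1, H/8, W/8) score map
```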