
    Fall Detection Using Channel State Information from WiFi Devices

    Falls among the independently living elderly population are a major public health concern, leading to injuries, loss of confidence in living independently, and even death. Each year, one in three people aged 65 and older falls, and one in five of those falls causes fatal or non-fatal injuries. Detecting a fall early and alerting caregivers can therefore save lives and improve quality of life. Existing solutions, e.g., push-button devices, wearables, cameras, radar, and pressure and vibration sensors, have seen limited public adoption because they require either wearing a device at all times or installing specialized, expensive infrastructure. In this thesis, a device-free, low-cost indoor fall detection system using commodity WiFi devices is presented. The system uses physical layer Channel State Information (CSI) to detect falls. Commercial WiFi hardware is cheap and ubiquitous, and CSI provides a wealth of information that helps maintain good fall detection accuracy even in challenging environments. The goals of the research in this thesis are the design, implementation, and experimental evaluation of a device-free fall detection system using CSI extracted from commercial WiFi devices. To achieve these objectives, the following contributions are made herein. A novel time-domain human presence detection scheme is developed as a precursor to detecting falls. As the next contribution, a novel fall detection system is designed and developed. Finally, two main enhancements to the fall detection system are proposed to improve its resilience to changes in the operating environment. Experiments were performed to validate system performance in diverse environments. Through the collection of real-world CSI traces, the characterization of CSI behavior during human motion, the development of a signal processing tool-set to facilitate the recognition of falls, and the validation of the system in real-world experiments, this work significantly advances the state of the art by providing a more robust fall detection scheme.
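    To illustrate the general idea (a minimal sketch only, not the thesis's actual pipeline, which this abstract does not specify), a fall can be flagged from a burst of rapid CSI amplitude change; the function name, window length, and threshold below are hypothetical and would need per-environment calibration.

        import numpy as np

        def detect_fall_candidates(csi_amplitude, window=64, threshold=5.0):
            # csi_amplitude: 1-D array of CSI amplitudes for one subcarrier over time.
            # A fall produces a short burst of rapid amplitude change, so a
            # short-time variance exceeding a calibrated threshold flags a
            # candidate fall event for further classification.
            events = []
            for start in range(0, len(csi_amplitude) - window, window // 2):
                if np.var(csi_amplitude[start:start + window]) > threshold:
                    events.append(start)
            return events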

    Airborne laser sensors and integrated systems

    The underlying principles and technologies enabling the design and operation of airborne laser sensors are introduced, and a detailed review of state-of-the-art avionic systems for civil and military applications is presented. Airborne lasers, including Light Detection and Ranging (LIDAR), Laser Range Finders (LRF), and Laser Weapon Systems (LWS), are extensively used today, and new promising technologies are being explored. Most laser systems are active devices that operate in a manner very similar to microwave radars but at much higher frequencies (e.g., LIDAR and LRF). Other devices (e.g., laser target designators and beam-riders) are used to precisely direct Laser Guided Weapons (LGW) against ground targets. The integration of both functions is often encountered in modern military avionics navigation-attack systems. The beneficial effects of airborne lasers, including the use of smaller components and remarkable angular resolution, have resulted in a host of manned and unmanned aircraft applications. On the other hand, laser sensor performance is much more sensitive to the vagaries of the atmosphere, and laser sensors are thus generally restricted to shorter ranges than microwave systems. Hence it is of paramount importance to analyse the performance of laser sensors and systems in various weather and environmental conditions. Additionally, it is important to define airborne laser safety criteria, since several systems currently in service operate in the near infrared with considerable risk to the naked human eye. Therefore, appropriate methods for predicting and evaluating the performance of infrared laser sensors/systems are presented, taking into account laser safety issues. For aircraft experimental activities with laser systems, it is essential to define test requirements taking into account the specific conditions for operational employment of the systems in the intended scenarios, and to verify the performance in realistic environments at the test ranges. To support the development of such requirements, useful guidelines are provided for test and evaluation of airborne laser systems, including laboratory, ground and flight test activities.
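    As a worked illustration of the ranging principle and the atmospheric sensitivity noted above (standard textbook forms, not equations quoted from this chapter), an LRF converts a measured round-trip time into range, and atmospheric extinction enters the received-power budget exponentially:

        R = \frac{c \, \Delta t}{2}

        P_r = P_t \, \frac{\rho}{\pi} \, \frac{A_r}{R^2} \, \eta_{sys} \, e^{-2 \gamma R}

    Here c is the speed of light, \Delta t the round-trip time, \rho the reflectivity of an assumed Lambertian extended target, A_r the receiver aperture area, \eta_{sys} the combined optics efficiency, and \gamma the atmospheric extinction coefficient; the e^{-2\gamma R} term is what makes laser range performance so weather-dependent compared with microwave radar.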

    Novel Hybrid-Learning Algorithms for Improved Millimeter-Wave Imaging Systems

    Increasing attention is being paid to millimeter-wave (mmWave), 30 GHz to 300 GHz, and terahertz (THz), 300 GHz to 10 THz, sensing applications including security sensing, industrial packaging, medical imaging, and non-destructive testing. Traditional methods for perception and imaging are challenged by novel data-driven algorithms that offer improved resolution, localization, and detection rates. Over the past decade, deep learning technology has garnered substantial popularity, particularly in perception and computer vision applications. Whereas conventional signal processing techniques are more easily generalized to various applications, hybrid approaches, in which signal processing and learning-based algorithms are interleaved, offer a promising compromise between performance and generalizability. Furthermore, such hybrid algorithms improve model training by leveraging the known characteristics of radio frequency (RF) waveforms, thus yielding more efficiently trained deep learning algorithms and offering higher performance than conventional methods. This dissertation introduces novel hybrid-learning algorithms for improved mmWave imaging systems applicable to a host of problems in perception and sensing. Various problem spaces are explored, including static and dynamic gesture classification; precise hand localization for human-computer interaction; high-resolution near-field mmWave imaging using forward synthetic aperture radar (SAR); SAR under irregular scanning geometries; mmWave image super-resolution using deep neural network (DNN) and Vision Transformer (ViT) architectures; and data-level multiband radar fusion using a novel hybrid-learning architecture. Furthermore, we introduce several novel approaches for deep learning model training and dataset synthesis.
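    As a minimal sketch of the hybrid-learning idea (an illustrative toy, not the dissertation's architecture), a fixed range-FFT stage can be interleaved with a small learned classifier, so the network trains on physically meaningful range profiles rather than raw ADC samples; all names and layer sizes here are hypothetical.

        import torch
        import torch.nn as nn

        class HybridRangeNet(nn.Module):
            # Illustrative hybrid pipeline: a fixed FFT range transform
            # (signal processing) feeding a small learned classifier
            # (deep learning).
            def __init__(self, num_classes=4):
                super().__init__()
                self.classifier = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(32), nn.Flatten(),
                    nn.Linear(16 * 32, num_classes),
                )

            def forward(self, adc_samples):
                # adc_samples: (batch, num_samples) baseband samples per chirp.
                spectrum = torch.fft.fft(adc_samples, dim=-1)   # range FFT
                magnitude = spectrum.abs().unsqueeze(1)         # (batch, 1, bins)
                return self.classifier(magnitude)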

    The University Defence Research Collaboration In Signal Processing

    This chapter describes the development of algorithms for automatic detection of anomalies from multi-dimensional, undersampled and incomplete datasets. The challenge in this work is to identify and classify behaviours as normal or abnormal, safe or threatening, from an irregular and often heterogeneous sensor network. Many defence and civilian applications can be modelled as complex networks of interconnected nodes with unknown or uncertain spatio-temporal relations. The behaviour of such heterogeneous networks can exhibit dynamic properties, reflecting evolution in both network structure (new nodes appearing and existing nodes disappearing) and inter-node relations. The UDRC work has addressed not only the detection of anomalies, but also the identification of their nature and their statistical characteristics. Normal patterns and changes in behaviour have been incorporated to provide an acceptable balance between true positive rate, false positive rate, performance and computational cost. Data quality measures have been used to ensure the models of normality are not corrupted by unreliable and ambiguous data. Exploiting the context of each node's activity in complex networks offers an even more efficient anomaly detection mechanism. This has allowed the development of efficient approaches which not only detect anomalies but also go on to classify their behaviour.
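    For a concrete, deliberately simple illustration of scoring node activity against a model of normality (a sketch only; the UDRC algorithms described above are considerably richer and handle undersampled, heterogeneous data), each node can be compared against its own history:

        import numpy as np

        def flag_anomalous_nodes(activity, z_threshold=3.0):
            # activity: (num_nodes, num_timesteps) observation matrix.
            # A node is flagged when its latest reading deviates from its
            # own history by more than z_threshold standard deviations.
            history, latest = activity[:, :-1], activity[:, -1]
            mean = history.mean(axis=1)
            std = history.std(axis=1) + 1e-9    # guard against zero variance
            z_scores = np.abs(latest - mean) / std
            return np.where(z_scores > z_threshold)[0]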

    Fusion Of Multiple Inertial Measurement Units And Its Application In Reduced Cost, Size, Weight, And Power Synthetic Aperture Radars

    Position, navigation, and timing (PNT) is the concept of determining where an object is on the Earth (position), the destination of the object (navigation), and when the object is in these positions (timing). In autonomous applications, these three attributes are crucial to determining the control inputs required to control and move the platform through an area. Traditionally, the position information is gathered mainly using a global positioning system (GPS), which can provide positioning sufficient for most PNT applications. However, GPS navigational solutions are limited by slower update rates and limited accuracy, and can be unreliable. GPS solutions update more slowly because the signal must travel a great distance from the satellite to the receiver. Additionally, the accuracy of the GPS solution depends on the environment of the receiver and on the effects of additional reflections that introduce ambiguity into the positional solution. As a result, the positional solution can become unstable or unreliable when the ambiguities are significant, greatly impacting its accuracy.

    A common solution to the shortcomings of GPS is to introduce an additional sensor that measures the physical state of the platform. Inertial measurement units (IMUs) are popularly used for this purpose and can provide faster position updates, as the signal transmission time is eliminated. Furthermore, because the IMU directly measures the physical forces acting on the platform, the ambiguities caused by additional signal reflections are also eliminated. Although the introduction of the IMU helps mitigate some of the shortcomings of GPS, these sensors introduce a slightly different set of challenges. Since IMUs directly measure the physical forces experienced by the platform, position must be estimated from these measurements: each estimate starts from the previously known position and integrates the changes implied by the measured accelerations. Because IMUs intrinsically have sensor noise and errors in their measurements, these errors directly impact the accuracy of the estimated position, and the inaccuracies are further compounded as each erroneous position estimate becomes the basis for future position calculations.

    Inertial navigation systems (INS) have been developed to pair IMUs with GPS and overcome the challenges posed by each sensor independently. The data from each sensor are processed using a technique known as data fusion, in which the statistical likelihood of each positional solution is evaluated to estimate the most likely position given the observations from every sensor. Data fusion allows the navigation solution to provide a position at the sampling rate of the fastest sensor while also limiting the compounding errors intrinsic to IMUs.

    Synthetic aperture radar (SAR) is an application that uses a moving radar to synthetically generate a larger aperture and create images of a target scene. The larger aperture allows a finer spatial resolution, resulting in higher quality SAR images. For SAR applications, the PNT solution is fundamental to producing a quality image, since the radar reports only the range to a target. To form an image, the range to each target must be aligned over the coherent processing interval (CPI). In doing so, the energy reflected from the target as the radar moves can be combined coherently and resolved to a pixel in the image product. In practice, the position of the radar is measured using a navigational solution based on a GPS and an IMU. Inaccuracies in these solutions directly degrade image quality in a SAR system, because the range measured by the radar will not agree with the calculated range to the location represented by the pixel. As a result, the final image becomes unfocused and the target is blurred across multiple pixels.

    For INS systems, increasing the accuracy of the final position estimate depends on the accuracy of the sensors in the system. An easy way to increase the accuracy of the INS solution is to upgrade to a higher grade IMU, which minimizes the errors compounded in the IMU estimates because the intrinsic noise perturbations are smaller. The trade-off is that IMU sensors increase in cost, size, weight, and power (C-SWAP) as sensor quality increases. This increase in C-SWAP is a challenge when using higher grade IMUs in INS navigational solutions for SAR applications, and the problem is amplified when developing miniaturized SAR systems.

    In this dissertation, a method of leveraging data fusion to combine multiple IMUs and produce higher accuracy INS solutions is presented; specifically, C-SWAP can be reduced by using lower quality IMUs. The use of lower quality IMUs presents the additional challenge of providing positional solutions at the rates required for SAR, so a method of interpolating the position provided by the fusion algorithm while maintaining positional accuracy is also presented. The presented methods succeed in providing accurate positional solutions from lower C-SWAP INS. They are verified in simulations of motion paths, with the results of the fusion algorithms evaluated for accuracy; they are exercised in both ground and flight tests, with the results compared to a third-party precise position solution as an accuracy metric; and, lastly, the algorithms are implemented in a miniaturized SAR system, with both ground and airborne SAR tests conducted to evaluate their effectiveness. In general, the designed algorithms are capable of producing positional accuracy at the rate required to focus SAR images in a miniaturized SAR system.
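    The GPS/IMU fusion cycle described above is commonly realized with a Kalman filter. The following is a minimal 1-D sketch under assumed, illustrative noise values (not the dissertation's multi-IMU algorithm): the IMU drives high-rate predictions, and slower GPS fixes correct the drift that IMU integration accumulates.

        import numpy as np

        class GpsImuFuser:
            # Minimal 1-D Kalman-filter sketch of GPS/IMU fusion with
            # illustrative noise values.
            def __init__(self, dt=0.01):
                self.F = np.array([[1.0, dt], [0.0, 1.0]])  # [position, velocity] transition
                self.B = np.array([[0.5 * dt**2], [dt]])    # acceleration input model
                self.H = np.array([[1.0, 0.0]])             # GPS observes position only
                self.Q = 1e-4 * np.eye(2)                   # process noise: IMU error growth
                self.R = np.array([[4.0]])                  # GPS position noise variance
                self.x = np.zeros((2, 1))                   # state estimate
                self.P = np.eye(2)                          # estimate covariance

            def predict(self, accel):
                # Run at the IMU rate: integrate acceleration into position/velocity.
                self.x = self.F @ self.x + self.B * accel
                self.P = self.F @ self.P @ self.F.T + self.Q

            def update(self, gps_position):
                # Run at the GPS rate: weigh the fix against the IMU-propagated state.
                y = gps_position - self.H @ self.x           # innovation
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
                self.x = self.x + K @ y
                self.P = (np.eye(2) - K @ self.H) @ self.P

    In use, predict() runs at the IMU rate (e.g., 100 Hz) and update() whenever a GPS fix arrives (e.g., 1 to 10 Hz), which is exactly how a fused solution reaches the fastest sensor's rate while bounding IMU drift.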

    Multi-Object Tracking System based on LiDAR and RADAR for Intelligent Vehicles applications

    This Final Degree Project aims to develop a 3D multi-object detection and tracking system based on the sensor fusion of LiDAR and RADAR for autonomous driving applications, built on traditional machine learning algorithms. The implementation is based on Python and ROS and complies with real-time requirements. In the object detection stage, the RANSAC plane segmentation algorithm is used, followed by the extraction of bounding boxes using DBSCAN. A late sensor fusion stage based on 3D Intersection over Union and a BEV-SORT tracking system complete the proposed architecture.
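    Since the abstract names Python and DBSCAN, a minimal sketch of the detection stage's clustering step follows (illustrative only: eps and min_samples are hypothetical, and the real pipeline runs after RANSAC ground removal):

        import numpy as np
        from sklearn.cluster import DBSCAN

        def cluster_obstacles(points, eps=0.5, min_samples=10):
            # points: (N, 3) LiDAR points with the ground plane already
            # removed (e.g., by RANSAC). DBSCAN groups the remaining points
            # into object clusters; an axis-aligned bounding box is derived
            # per cluster as (min corner, max corner).
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
            boxes = []
            for label in set(labels) - {-1}:     # -1 marks noise points
                cluster = points[labels == label]
                boxes.append((cluster.min(axis=0), cluster.max(axis=0)))
            return boxes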

    Radar Technology

    In this book “Radar Technology”, the chapters are divided into four main topic areas:
    Topic area 1, “Radar Systems”, consists of chapters that treat whole radar systems and the environment and target functional chain.
    Topic area 2, “Radar Applications”, shows various applications of radar systems, including meteorological radars, ground penetrating radars and glaciology.
    Topic area 3, “Radar Functional Chain and Signal Processing”, describes several aspects of radar signal processing, from parameter extraction and target detection to tracking and classification technologies.
    Topic area 4, “Radar Subsystems and Components”, covers the design of radar subsystem components, such as antennas and waveforms.

    A comprehensive multimodal dataset for contactless lip reading and acoustic analysis

    Small-scale motion detection using non-invasive remote sensing techniques has recently garnered significant interest in the field of speech recognition. This dataset paper aims to facilitate the enhancement and restoration of speech information from diverse data sources. In this paper, we introduce a novel multimodal dataset comprising radio frequency, visual, text, audio, laser, and lip landmark information, called RVTALL. Specifically, the dataset consists of 7.5 GHz Channel Impulse Response (CIR) data from ultra-wideband (UWB) radars, 77 GHz frequency modulated continuous wave (FMCW) data from millimeter wave (mmWave) radar, visual and audio information, lip landmarks, and laser data, offering a unique multimodal approach to speech recognition research. A depth camera is used to record the landmarks of each subject's lips alongside their voice. Approximately 400 minutes of annotated speech profiles are provided, collected from 20 participants speaking 5 vowels, 15 words, and 16 sentences. The dataset has been validated and has potential for the investigation of lip reading and multimodal speech recognition.
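    To make the FMCW component of the dataset concrete (a generic processing sketch, not the dataset's own tooling; the function and parameter names are hypothetical), a range profile is obtained from one chirp's beat signal via an FFT, with beat frequency f_b mapping to range as R = c f_b T / (2B) for chirp duration T and bandwidth B:

        import numpy as np

        def fmcw_range_profile(beat_signal, chirp_bandwidth, chirp_duration):
            # beat_signal: sampled beat (dechirped) signal from one FMCW chirp.
            # An FFT of the beat signal places each target at a beat frequency
            # f_b, which converts to range via R = c * f_b * T / (2 * B).
            c = 3e8                                   # speed of light, m/s
            n = len(beat_signal)
            spectrum = np.abs(np.fft.rfft(beat_signal))
            beat_freqs = np.fft.rfftfreq(n, d=chirp_duration / n)
            ranges = c * beat_freqs * chirp_duration / (2 * chirp_bandwidth)
            return ranges, spectrum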