
    mm-Pose: Real-Time Human Skeletal Posture Estimation using mmWave Radars and CNNs

    In this paper, mm-Pose, a novel approach to detect and track human skeletons in real time using an mmWave radar, is proposed. To the best of the authors' knowledge, this is the first method to detect more than 15 distinct skeletal joints using mmWave radar reflection signals. The proposed method would find applications in traffic monitoring systems, autonomous vehicles, patient monitoring systems, and defense, detecting and tracking the human skeleton for effective and preventive decision making in real time. The use of radar makes the system robust to scene lighting and adverse weather conditions. The reflected radar point cloud in range, azimuth, and elevation is first resolved and projected onto the Range-Azimuth and Range-Elevation planes. A novel low-size, high-resolution radar-to-image representation is also presented that overcomes the sparsity of traditional point cloud data and significantly reduces the size of the subsequent machine learning architecture. The RGB channels are assigned the normalized values of range, elevation/azimuth, and the power level of the reflected signal for each point. A forked CNN architecture is used to predict the real-world 3-D positions of the skeletal joints from the radar-to-image representation. The proposed method was tested in a single-human scenario on four primary motions, (i) walking, (ii) swinging the left arm, (iii) swinging the right arm, and (iv) swinging both arms, to validate accurate predictions for motion in range, azimuth, and elevation. The detailed methodology, implementation, challenges, and validation results are presented. Comment: Submitted to IEEE Sensors Journal.
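    The radar-to-image encoding described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, assuming the point cloud arrives as (range, azimuth, elevation, power) rows; the helper name and image size are hypothetical, not the authors' implementation:

```python
import numpy as np

def radar_points_to_image(points, img_size=64):
    """Project a sparse radar point cloud onto a Range-Azimuth image.

    points: (N, 4) array of (range, azimuth, elevation, power).
    Returns an (img_size, img_size, 3) float image whose RGB channels
    hold normalized range, elevation, and reflected power, loosely
    following the radar-to-image encoding described in the abstract.
    """
    img = np.zeros((img_size, img_size, 3))
    rng, az, el, pw = points.T
    # Normalize each quantity to [0, 1] before channel assignment.
    norm = lambda v: (v - v.min()) / (v.max() - v.min() + 1e-9)
    r_n, az_n, el_n, p_n = map(norm, (rng, az, el, pw))
    # Map (range, azimuth) to pixel coordinates on the projection plane.
    rows = np.clip((r_n * (img_size - 1)).astype(int), 0, img_size - 1)
    cols = np.clip((az_n * (img_size - 1)).astype(int), 0, img_size - 1)
    img[rows, cols] = np.stack([r_n, el_n, p_n], axis=1)
    return img
```

    A second image on the Range-Elevation plane would be built the same way with elevation in place of azimuth, giving the dense, fixed-size inputs the forked CNN consumes.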

    Activity Recognition of Office Space Users using Thermopile Array Sensor


    Discovering user mobility and activity in smart lighting environments

    "Smart lighting" environments seek to improve energy efficiency, human productivity, and health by combining sensors, controls, and Internet-enabled lights with emerging "Internet-of-Things" technology. Interesting and potentially impactful applications involve adaptive lighting that responds to individual occupants' location, mobility, and activity. This dissertation focuses on the recognition of user mobility and activity using two sensing modalities and their associated analytical techniques: body-worn inertial sensors in a first study, followed by smart-lighting-inspired infrastructure sensors deployed with the lights. The first approach employs wearable inertial sensors and body area networks that monitor human activities with a user's smart devices. Real-time algorithms are developed to (1) estimate angles of excess forward lean to prevent risk of falls, (2) identify functional activities, including postures, locomotion, and transitions, and (3) capture gait parameters. Two human activity datasets were collected, from 10 healthy young adults and 297 elderly subjects, respectively, for laboratory validation and real-world evaluation. Results show that these algorithms identify all functional activities accurately, with a sensitivity of 98.96% on the 10-subject dataset, and detect walking activities and gait parameters consistently, with high test-retest reliability (p-value < 0.001), on the 297-subject dataset. The second approach leverages pervasive "smart lighting" infrastructure to track human location and predict activities. A use-case-oriented design methodology guides the choice of sensor operating parameters against localization performance metrics from a system perspective. Integrating a network of low-resolution time-of-flight sensors in ceiling fixtures, a recursive 3D location estimation formulation is established that links a physical indoor space to an analytical simulation framework.
    Based on indoor location information, a label-free, clustering-based method is developed to learn user behaviors and activity patterns. Location datasets are collected while users perform unconstrained and uninstructed activities in the smart lighting testbed under different layout configurations. Results show that activity recognition performance, measured in terms of CCR, ranges from approximately 90% to 100% across a wide range of spatio-temporal resolutions on these location datasets, and is insensitive to reconfiguration of the environment layout and to the presence of multiple users.
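    As a rough illustration of label-free clustering over indoor location data, the sketch below groups 2-D location samples into activity zones with plain k-means. The dissertation does not specify its clustering algorithm beyond "clustering-based", so k-means with a deterministic farthest-point initialisation stands in here:

```python
import numpy as np

def kmeans_locations(X, k=3, iters=50):
    """Cluster 2-D location samples into k activity zones (plain k-means).

    X: (N, 2) array of location samples. Returns per-sample labels and
    the k cluster centers. A generic stand-in for the testbed's
    label-free clustering step, not its actual method.
    """
    # Farthest-point initialisation keeps the sketch deterministic.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute means.
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

    Dwell time per cluster over a day then gives a simple activity-pattern summary without any labeled training data.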

    Irish Machine Vision and Image Processing Conference Proceedings 2017


    Online Semantic Labeling of Deformable Tissues for Medical Applications

    University of Minnesota Ph.D. dissertation, May 2017. Major: Mechanical Engineering. Advisor: Timothy Kowalewski. 1 computer file (PDF); ix, 133 pages. Surgery remains dangerous, and accurate knowledge of what is presented to the surgeon can be of great importance. One technique for automating this is non-rigid tracking of time-of-flight camera scans, which requires accurate sensors and prior information as well as an accurate non-rigid tracking algorithm. This thesis presents an evaluation of four algorithms for tracking and semantic labeling of deformable tissues for medical applications, as well as additional studies on a stretchable, flexible smart skin and dynamic 3D bioprinting. The algorithms, developed and tested for this study, were evaluated in terms of speed and accuracy: affine iterative closest point, nested iterative closest point, affine fast point feature histograms, and nested fast point feature histograms. They were tested against simulated data as well as direct scans. The nested iterative closest point algorithm provided the best balance of speed and accuracy while providing semantic labeling, both in simulation and on directly scanned data; this shows that fast point feature histograms are not suitable for non-rigid tracking of geometric-feature-poor human tissues. Secondary experiments showed that the graphics processing unit provides enough speed to run iterative closest point algorithms in real time, and that time-of-flight depth sensing works through an endoscope. Additional research on related topics led to the development of a novel stretchable, flexible smart skin sensor and an active 3D bioprinting system for moving human anatomy.
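    A single iteration of basic rigid iterative closest point (nearest-neighbour matching followed by a Kabsch best-fit transform) can be sketched as follows. This is the textbook rigid variant for illustration only, not the thesis's affine or nested algorithms:

```python
import numpy as np

def icp_step(src, dst):
    """One rigid ICP iteration: nearest-neighbour matching + Kabsch fit.

    src, dst: (N, d) and (M, d) point sets. Returns the transformed
    source points and the fitted rotation R and translation t.
    """
    # Match each source point to its nearest neighbour in dst.
    d = np.linalg.norm(src[:, None] - dst[None], axis=2)
    matched = dst[d.argmin(axis=1)]
    # Best-fit rotation/translation via SVD of the cross-covariance.
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t, R, t
```

    Iterating this step to convergence is what makes ICP fast enough for GPU real-time use; the nested variants in the thesis extend the transform model beyond the rigid case shown here.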

    A two phase framework for visible light-based positioning in an indoor environment: performance, latency, and illumination

    Recently, with the advancement of solid-state lighting and its application to Visible Light Communications (VLC), Visible Light Positioning (VLP) has been targeted as a very attractive indoor positioning system (IPS) due to its ubiquity, directionality, spatial reuse, and relatively high modulation bandwidth. IPSs in general have four major components: (1) a modulation, (2) a multiple access scheme, (3) a channel measurement, and (4) a positioning algorithm. A number of VLP approaches have been proposed in the literature; these primarily focus on a fixed combination of these elements and, moreover, often evaluate the quality of the contribution by accuracy or precision alone. In this dissertation, we provide a novel two-phase indoor positioning algorithmic framework that increases robustness when subject to insufficient anchor luminaires and can incorporate any combination of the four major IPS components. The first phase provides robust and timely, albeit less accurate, proximity estimates without requiring more than a single luminaire anchor, using time-division access to On-Off Keying (OOK) modulated signals, while the second phase provides a more accurate, conventional positioning estimate using a novel geometrically constrained triangulation algorithm based on angle-of-arrival (AoA) measurements. This approach, however, is still an application of a specific combination of IPS components. To achieve a broader impact, the framework is employed on a collection of IPS component combinations spanning (1) pulsed and multicarrier modulations, (2) time-, frequency-, and code-division multiple access, (3) received signal strength (RSS), time of flight (ToF), and AoA measurements, as well as (4) trilateration and triangulation positioning algorithms. Results illustrate full-room positioning coverage with median accuracies ranging from 3.09 cm to 12.07 cm at 50% duty cycle illumination levels.
    The framework further allows for duty cycle variation to support dimming modulations; median accuracies range from 3.62 cm to 13.15 cm at a 20% duty cycle and from 2.06 cm to 8.44 cm at a 78% duty cycle. Testbed results reinforce the framework's applicability. Lastly, a novel latency-constrained optimization algorithm can be overlaid on the two-phase framework to decide when to simply use the coarse estimate and when to expend more computational resources on a potentially more accurate fine estimate. The two-phase framework enables robust, illumination- and latency-sensitive positioning that can be applied under a vast array of system deployment constraints.
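    The second-phase idea of positioning from angle-of-arrival measurements can be illustrated with a generic least-squares intersection of bearing lines: each anchor contributes a ray toward the receiver, and the point minimising the squared perpendicular distance to all rays is found in closed form. This is a textbook 2-D sketch, not the dissertation's geometrically constrained algorithm:

```python
import numpy as np

def triangulate_aoa(anchors, bearings):
    """Least-squares intersection of 2-D bearing lines (AoA triangulation).

    anchors: (N, 2) anchor positions; bearings: N angles (radians) from
    each anchor toward the receiver. Solves sum_i P_i x = sum_i P_i p_i,
    where P_i projects onto the subspace orthogonal to ray direction u_i.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, th in zip(anchors, bearings):
        u = np.array([np.cos(th), np.sin(th)])
        P = np.eye(2) - np.outer(u, u)   # projector orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)
```

    With two or more non-parallel bearings the normal matrix is invertible and the estimate is unique; noise in the angles turns the exact intersection into the least-squares point the solver returns.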

    Biometrics

    Biometrics - Unique and Diverse Applications in Nature, Science, and Technology provides a unique sampling of the diverse ways in which biometrics is integrated into our lives and our technology. From time immemorial, we as humans have been intrigued by, perplexed by, and entertained by observing and analyzing ourselves and the natural world around us. Science and technology have evolved to a point where we can empirically record a measure of a biological or behavioral feature and use it for recognizing patterns, trends, and/or discrete phenomena, such as individuals; and this is what biometrics is all about. Understanding some of the ways in which we use biometrics, and for what specific purposes, is what this book is all about.

    A Survey of 3D Indoor Localization Systems and Technologies

    Indoor localization has recently attracted significant interest from the research community, mainly because Global Navigation Satellite Systems (GNSSs) typically fail in indoor environments. Over the last couple of decades, several works reported in the literature have attempted to tackle the indoor localization problem. However, most of this work focuses solely on two-dimensional (2D) localization, while very few papers consider three dimensions (3D). There is also a noticeable lack of survey papers focusing on 3D indoor localization; hence, in this paper, we carry out a survey and provide a detailed critical review of the current state of the art in 3D indoor localization, including geometric approaches such as angle of arrival (AoA), time of arrival (ToA), and time difference of arrival (TDoA); fingerprinting approaches based on Received Signal Strength (RSS), Channel State Information (CSI), Magnetic Field (MF), and Fine Time Measurement (FTM); as well as fusion-based and hybrid positioning techniques. We review a variety of technologies, focusing on wireless technologies that may be utilized for 3D indoor localization, such as WiFi, Bluetooth, UWB, mmWave, visible light, and sound-based technologies, and critically analyze the advantages and disadvantages of each approach and technology for 3D localization.
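    As a concrete example of the geometric approaches surveyed, ToA ranges from known anchors can be turned into a 3-D position by linearizing the sphere equations: subtracting the first range equation from the rest cancels the quadratic terms in the unknown position, leaving a linear least-squares problem. A standard sketch, independent of any particular system in the survey:

```python
import numpy as np

def trilaterate_3d(anchors, dists):
    """3-D ToA trilateration by linearizing the sphere equations.

    anchors: (N, 3) anchor positions (N >= 4, non-coplanar);
    dists: N measured ranges. From |x - a_i|^2 = d_i^2, subtracting the
    i = 0 equation gives 2(a_i - a_0).x = |a_i|^2 - |a_0|^2 - d_i^2 + d_0^2.
    """
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2)
         - dists[1:] ** 2 + d0 ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

    With noisy ranges the same formulation returns the least-squares position; TDoA leads to a similar linear system with range differences in place of absolute ranges.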

    Visibility in underwater robotics: Benchmarking and single image dehazing

    Dealing with limited underwater visibility is one of the most important challenges in autonomous underwater robotics. Light transmission in the water medium degrades images, making interpretation of the scene difficult and consequently compromising the whole intervention. This thesis contributes by analysing, through benchmarking, the impact of underwater image degradation on commonly used vision algorithms. An online framework for underwater research that makes it possible to analyse results under different conditions is presented. Finally, motivated by the results of experimentation with the developed framework, a deep learning solution is proposed that is capable of dehazing a degraded image in real time, restoring the original colors of the image.
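    For context on what "dehazing" means here, the classical dark-channel prior (He et al.) can be sketched in a few lines of NumPy. The thesis itself proposes a deep learning dehazer; this is only the traditional baseline such methods are usually compared against:

```python
import numpy as np

def _min_filter(x, size):
    """Square minimum filter built from shifted copies (no SciPy needed)."""
    r = size // 2
    p = np.pad(x, r, mode='edge')
    H, W = x.shape
    return np.min([p[i:i + H, j:j + W]
                   for i in range(size) for j in range(size)], axis=0)

def dehaze_dark_channel(img, omega=0.95, patch=7, t_min=0.1):
    """Single-image dehazing via the dark-channel prior.

    img: HxWx3 float array in [0, 1]. Returns the recovered radiance
    by inverting the haze model I = J*t + A*(1 - t).
    """
    dark = _min_filter(img.min(axis=2), patch)
    # Atmospheric light A: mean colour of the brightest dark-channel pixels.
    idx = dark.ravel().argsort()[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimate from the dark channel of the normalised image.
    t = 1.0 - omega * _min_filter((img / A).min(axis=2), patch)
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

    Underwater scenes add wavelength-dependent attenuation on top of this haze model, which is part of why the thesis turns to a learned real-time dehazer instead.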