12 research outputs found
Human activity classification using micro-Doppler signatures and ranging techniques
PhD Thesis
Human activity recognition is emerging as an important research area due to its potential applications in surveillance, assisted living, and military operations. Various sensors,
including accelerometers, RFID, and cameras, have been applied to achieve automatic
human activity recognition. Wearable sensor-based techniques have been well explored.
However, some studies have shown that many users are reluctant to use wearable
sensors and may also forget to carry them. Consequently, research in this area started
to apply contactless sensing techniques to achieve human activity recognition unobtrusively. In this research, two methods were investigated for human activity recognition,
one method is radar-based and the other uses LiDAR (Light Detection and Ranging). Compared to other techniques, Doppler radar and LiDAR have several advantages,
including all-weather, all-day capability and non-contact, non-intrusive operation.
Doppler radar also penetrates walls, clothing, trees, etc. LiDAR can capture accurate (centimetre-level) target locations in real time. These characteristics
make methods based on Doppler radar and LiDAR superior to other techniques.
Firstly, this research measured micro-Doppler signatures of different human activities
indoors and outdoors using Doppler radars. Micro-Doppler signatures are presented in
the frequency domain to reflect the different frequency shifts resulting from the different components of a moving target. One of the major differences between this research
and other relevant work is that a simple, very low-power pulsed radar system was
used. The outdoor experiments were performed in heavily cluttered environments (grass, trees,
uneven terrain), and confusers, including animals and drones, were also considered in the
experiments. Machine learning techniques were applied in novel ways to perform
subject classification, human activity classification, people counting, and coarse-grained
localisation by classifying the micro-Doppler signatures. For feature extraction from the micro-Doppler signatures, this research proposed the use of two-directional two-dimensional principal component analysis (2D2PCA). The results show that by applying
2D2PCA, the accuracy results of Support Vector Machine (SVM) and k-Nearest Neighbour (kNN) classifiers were greatly improved. A Convolutional Neural Network (CNN)
was built for the target classifications of type, number, activity, and coarse localisation.
The CNN model obtained very high classification accuracies (97% to 100%) for the outdoor experiments, which were superior to the results obtained by SVM and kNN. The
indoor experiments measured several daily activities, with a focus on dietary activities
(eating and drinking). An overall classification rate of 92.8% was obtained in activity
recognition in a kitchen scenario using the CNN. Most importantly, operating in near real time,
the proposed approach successfully recognised human activities more than 89% of
the time. This research also investigated the effects on the classification performance of
the frame length of the sliding window, the angle of the direction of movement, and the
number of radars used, providing valuable guidelines for machine learning modelling and
experimental setup of micro-Doppler based research and applications.
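As a rough illustration of the 2D2PCA feature-extraction step described above, the following sketch (Python with NumPy; the function name, array shapes, and output dimensions are hypothetical choices for illustration, not taken from the thesis) projects a stack of 2D spectrogram-like matrices from both the row and column directions:

```python
import numpy as np

def two_directional_2dpca(images, d_rows=8, d_cols=8):
    """Two-directional 2D PCA (2D2PCA): learn projections acting on the
    left (rows) and right (columns) of each 2D image, compressing every
    image to a small d_rows x d_cols feature matrix."""
    A = np.asarray(images, dtype=float)            # (M, m, n) stack
    centered = A - A.mean(axis=0)
    # Column-direction scatter (n x n): acts on the right of each image.
    G = np.einsum('kij,kil->jl', centered, centered) / len(A)
    # Row-direction scatter (m x m): acts on the left of each image.
    H = np.einsum('kij,klj->il', centered, centered) / len(A)
    # eigh returns ascending eigenvalues, so reverse to keep the largest.
    _, X = np.linalg.eigh(G); X = X[:, ::-1][:, :d_cols]
    _, Z = np.linalg.eigh(H); Z = Z[:, ::-1][:, :d_rows]
    # Compressed features: Z^T A X  ->  (M, d_rows, d_cols)
    feats = np.einsum('iq,kij,jd->kqd', Z, A, X)
    return feats, (Z, X)

# Hypothetical example: 20 fake 64 x 128 "spectrograms" compressed to 8 x 8.
rng = np.random.default_rng(0)
specs = rng.standard_normal((20, 64, 128))
feats, _ = two_directional_2dpca(specs)
print(feats.shape)  # (20, 8, 8)
```

The compressed feature matrices can then be flattened and fed to SVM or kNN classifiers, which is consistent with the pipeline the abstract describes.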
Secondly, this research used a two-dimensional (2D) LiDAR to perform human activity
detection indoors. LiDAR is a popular surveying method that has been widely used in
localisation, navigation, and mapping. This research proposed the use of a 2D LiDAR
to perform multiple people activity recognition by classifying their trajectories. Points
collected by the LiDAR were clustered and classified into human and non-human classes.
For the human class, a Kalman filter was used to track their trajectories, and the trajectories were further segmented and labelled with their corresponding activities. Spatial
transformation was used for trajectory augmentation in order to overcome the problem
of unbalanced classes and boost the performance of human activity recognition. Finally,
a Long Short-Term Memory (LSTM) network and a Temporal Convolutional Network
(TCN) were built to classify the trajectory samples into fifteen activity classes. The TCN
achieved the best result, with 99.49% overall accuracy, slightly outperforming the LSTM.
Both outperform the hidden Markov model (HMM), dynamic time warping (DTW), and
SVM by a wide margin.
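The Kalman-filter tracking step described above can be sketched as follows. This is a minimal constant-velocity filter over 2D cluster centroids; the noise parameters, time step, and function names are arbitrary illustrative choices, not values from the thesis:

```python
import numpy as np

def make_cv_kalman(dt=0.1, q=0.05, r=0.1):
    """Constant-velocity Kalman matrices for a 2D LiDAR track:
    state [x, y, vx, vy], measurement [x, y]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)    # observe position only
    return F, H, q * np.eye(4), r * np.eye(2)

def kalman_track(centroids, dt=0.1):
    """Filter one cluster-centroid sequence; returns smoothed positions."""
    F, H, Q, R = make_cv_kalman(dt)
    x = np.array([*centroids[0], 0.0, 0.0])  # start at first detection
    P = np.eye(4)
    out = []
    for z in centroids:
        x = F @ x; P = F @ P @ F.T + Q                 # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)            # update
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)

# Hypothetical walk along x, with centimetre-level measurement noise.
truth = np.stack([np.arange(50) * 0.1, np.zeros(50)], axis=1)
noisy = truth + np.random.default_rng(1).normal(0, 0.03, truth.shape)
track = kalman_track(noisy)
print(track.shape)  # (50, 2)
```

The filtered trajectories would then be segmented into fixed-length samples before classification, matching the segment-and-label step in the abstract.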
Technology 2001: The Second National Technology Transfer Conference and Exposition, volume 2
Proceedings of the workshop are presented. The mission of the conference was to transfer advanced technologies developed by the Federal government, its contractors, and other high-tech organizations to U.S. industries for their use in developing new or improved products and processes. Volume two presents papers on the following topics: materials science, robotics, test and measurement, advanced manufacturing, artificial intelligence, biotechnology, electronics, and software engineering.
Research and Creative Activity, July 1, 2020-June 30, 2021: Major Sponsored Programs and Faculty Accomplishments in Research and Creative Activity, University of Nebraska-Lincoln
Foreword by Bob Wilhelm, Vice Chancellor for Research and Economic Development, University of Nebraska-Lincoln:
This booklet highlights successes in research, scholarship and creative activity by University of Nebraska–Lincoln faculty during the fiscal year running July 1, 2020, to June 30, 2021.
It lists investigators, project titles and funding sources on major grants and sponsored awards received during the year; fellowships and other recognitions and honors bestowed on our faculty; books and chapters published by faculty; performances, exhibitions and other examples of creative activity; patents and licensing agreements issued; National Science Foundation I-CORPS teams; and peer-reviewed journal articles and conference presentations. In recognition of the important role faculty have in the undergraduate experience at Nebraska, this booklet notes the students and mentors participating in the Undergraduate Creative Activities and Research Experience (UCARE) and the First-Year Research Experience (FYRE) programs.
While metrics cannot convey the full impact of our work, they are tangible measures of growth. A few achievements of note:
• UNL achieved a record $372 million.
• Industry sponsorship supported $6.48 million in licensing income.
I applaud the Nebraska Research community for its determination and commitment during a challenging year. Your hard work has made it possible for our momentum to continue growing.
Our university is poised for even greater success. The Grand Challenges initiative provides a framework for developing bold ideas to solve society’s greatest issues, which is how we will have the greatest impact as an institution. Please visit research.unl.edu/grandchallenges to learn more. We’re also renewing our campus commitment to a journey of anti-racism and racial equity, which is among the most important work we’ll do.
I am pleased to present this record of accomplishments.
Contents
Awards of $1 Million or More
Awards of $250,000 or More
Arts and Humanities Awards of $249,999 or Less
Arts and Humanities Awards of $49,999 or Less
Patents
License Agreements
National Science Foundation Innovation Corps Teams
Creative Activity
Books
Recognitions and Honors
Journal Articles
Conference Presentations
UCARE and FYRE Projects
Glossary
Low complexity multi-directional in-air ultrasonic gesture recognition using a TCN
Following the trend of ultrasound-based gesture recognition, this study introduces the concept of time-sequence classification of ultrasonic patterns induced by hand movements on a microphone array. We refer to time-sequence ultrasound echoes as continuous frequency patterns being received in real time at different steering angles. The ultrasound source is a single tone continuously emitted from the center of the microphone array. In the interim, the array beamforms and locates an ultrasonic activity (induced echoes), after which a processing pipeline is initiated to extract band-limited frequency features. These beamformed features are organized in a 2D matrix of size 11 × 30, updated every 10 ms, on which a Temporal Convolutional Network (TCN) outputs continuous classification. Prior to that, the same TCN is trained to classify the Doppler shift variability rate. Using this approach, we show that a user can easily perform 49 gestures at different steering angles by means of sequence detection. To keep it simple for users, we define two Doppler shift variability rates, very slow and very fast, which the TCN detects 95-99% of the time. Not only can a gesture be performed in different directions, but the length of each performed gesture can also be measured. This leverages the diversity of in-air ultrasonic gestures, allowing more control capabilities. The process is designed under low-resource settings; that is, given that this real-time process is always on, the power and memory resources should be optimized. The proposed solution needs 6.2-10.2 MMACs and a memory footprint of 6 KB, allowing such a gesture recognition system to be hosted by energy-constrained edge devices such as smart speakers.
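The core building block of a TCN like the one described above is the causal dilated 1-D convolution, whose output at each frame depends only on current and past frames. The following minimal sketch (NumPy; all layer sizes, weights, and names are hypothetical stand-ins, since the paper's trained network is not available) runs a small causal stack over an 11 × 30 beamformed feature matrix:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """Causal dilated 1-D convolution.
    x: (c_in, T) feature sequence; w: (c_out, c_in, k) kernel.
    Output at time t sees only t, t-dilation, ..., t-(k-1)*dilation."""
    c_out, c_in, k = w.shape
    T = x.shape[1]
    pad = dilation * (k - 1)
    xp = np.pad(x, ((0, 0), (pad, 0)))  # left-pad so no future leaks in
    y = np.zeros((c_out, T))
    for i in range(k):                  # accumulate one kernel tap at a time
        y += w[:, :, i] @ xp[:, i * dilation : i * dilation + T]
    return y

def tiny_tcn(frames, weights):
    """Toy TCN-style stack over beamformed frames (11 bands x T frames):
    two ReLU layers with growing dilation, then per-frame class scores."""
    h = np.maximum(causal_dilated_conv(frames, weights[0], dilation=1), 0)
    h = np.maximum(causal_dilated_conv(h, weights[1], dilation=2), 0)
    return causal_dilated_conv(h, weights[2], dilation=4)  # (n_classes, T)

# Hypothetical shapes: 11 frequency bands, 30 frames, 3 gesture classes.
rng = np.random.default_rng(0)
frames = rng.standard_normal((11, 30))
weights = [rng.standard_normal(s) * 0.1
           for s in [(16, 11, 3), (16, 16, 3), (3, 16, 3)]]
scores = tiny_tcn(frames, weights)
print(scores.shape)  # (3, 30)
```

Because every convolution is left-padded, perturbing the last input frame cannot change earlier outputs, which is what makes continuous per-frame classification on a streaming 10 ms update feasible.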
Collected Papers (on Physics, Artificial Intelligence, Health Issues, Decision Making, Economics, Statistics), Volume XI
This eleventh volume of Collected Papers includes 90 papers comprising 988 pages on Physics, Artificial Intelligence, Health Issues, Decision Making, Economics, Statistics, written between 2001 and 2022 by the author alone or in collaboration with the following 84 co-authors (alphabetically ordered) from 19 countries: Abhijit Saha, Abu Sufian, Jack Allen, Shahbaz Ali, Ali Safaa Sadiq, Aliya Fahmi, Atiqa Fakhar, Atiqa Firdous, Sukanto Bhattacharya, Robert N. Boyd, Victor Chang, Victor Christianto, V. Christy, Dao The Son, Debjit Dutta, Azeddine Elhassouny, Fazal Ghani, Fazli Amin, Anirudha Ghosha, Nasruddin Hassan, Hoang Viet Long, Jhulaneswar Baidya, Jin Kim, Jun Ye, Darjan Karabašević, Vasilios N. Katsikis, Ieva Meidutė-Kavaliauskienė, F. Kaymarm, Nour Eldeen M. Khalifa, Madad Khan, Qaisar Khan, M. Khoshnevisan, Kifayat Ullah, Volodymyr Krasnoholovets, Mukesh Kumar, Le Hoang Son, Luong Thi Hong Lan, Tahir Mahmood, Mahmoud Ismail, Mohamed Abdel-Basset, Siti Nurul Fitriah Mohamad, Mohamed Loey, Mai Mohamed, K. Mohana, Kalyan Mondal, Muhammad Gulfam, Muhammad Khalid Mahmood, Muhammad Jamil, Muhammad Yaqub Khan, Muhammad Riaz, Nguyen Dinh Hoa, Cu Nguyen Giap, Nguyen Tho Thong, Peide Liu, Pham Huy Thong, Gabrijela Popović, Surapati Pramanik, Dmitri Rabounski, Roslan Hasni, Rumi Roy, Tapan Kumar Roy, Said Broumi, Saleem Abdullah, Muzafer Saračević, Ganeshsree Selvachandran, Shariful Alam, Shyamal Dalapati, Housila P. Singh, R. Singh, Rajesh Singh, Predrag S. Stanimirović, Kasan Susilo, Dragiša Stanujkić, Alexandra Şandru, Ovidiu Ilie Şandru, Zenonas Turskis, Yunita Umniyati, Alptekin Ulutaș, Maikel Yelandi Leyva Vázquez, Binyamin Yusoff, Edmundas Kazimieras Zavadskas, Zhao Loon Wang.