Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs
We address the problem of making human motion capture in the wild more
practical by using a small set of inertial sensors attached to the body. Since
the problem is heavily under-constrained, previous methods either use a large
number of sensors, which is intrusive, or they require additional video input.
We take a different approach and constrain the problem by: (i) making use of a
realistic statistical body model that includes anthropometric constraints and
(ii) using a joint optimization framework to fit the model to orientation and
acceleration measurements over multiple frames. The resulting tracker Sparse
Inertial Poser (SIP) enables 3D human pose estimation using only 6 sensors
(attached to the wrists, lower legs, back and head) and works for arbitrary
human motions. Experiments on the recently released TNT15 dataset show that,
using the same number of sensors, SIP achieves higher accuracy than the dataset
baseline without using any video data. We further demonstrate the effectiveness
of SIP on newly recorded challenging motions in outdoor scenarios such as
climbing or jumping over a wall. Comment: 12 pages, Accepted at Eurographics 201
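The abstract above describes jointly fitting a body model to orientation and acceleration measurements over multiple frames. A minimal sketch of that idea, reduced to a single hypothetical 1-DoF joint angle with a unit time step (the function name, cost weights, and gradient-descent settings are illustrative assumptions, not SIP's actual optimizer):

```python
import numpy as np

def fit_angles(meas_orient, meas_accel, lam=0.1, iters=500, lr=0.01):
    """Toy multi-frame fit for a single 1-DoF joint angle.

    Cost = sum_t (theta_t - orient_t)^2
         + lam * sum_t (D2(theta)_t - accel_t)^2,
    where D2 is the second finite difference (discrete acceleration),
    echoing the idea of jointly fitting orientation and acceleration
    terms over a window of frames via gradient descent.
    """
    meas_orient = np.asarray(meas_orient, dtype=float)
    meas_accel = np.asarray(meas_accel, dtype=float)
    theta = meas_orient.copy()
    for _ in range(iters):
        grad = 2.0 * (theta - meas_orient)                  # orientation term
        acc = theta[2:] - 2.0 * theta[1:-1] + theta[:-2]    # discrete accel
        r = acc - meas_accel[1:-1]                          # accel residual
        # chain rule through the [1, -2, 1] finite-difference stencil
        grad[2:] += 2.0 * lam * r
        grad[1:-1] += -4.0 * lam * r
        grad[:-2] += 2.0 * lam * r
        theta -= lr * grad
    return theta
```

When the two measurement streams are mutually consistent, the fit reproduces the orientations exactly; when one orientation sample is corrupted, the acceleration term pulls the estimate back toward a dynamically consistent trajectory, which is the benefit of optimizing over multiple frames at once.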
An Efficient Fusion Scheme for Human Hand Trajectory Reconstruction Using Inertial Measurement Unit and Kinect Camera
The turn of the 21st century has witnessed an evolving trend in wearable devices research and improvements in human-computer interfaces. In such systems, position information of human hands in 3-D space has become extremely important, as various applications require knowledge of the user’s hand position. A promising example is a wearable ring that can naturally and ubiquitously reconstruct handwriting based on the motion of the human hand in an indoor environment. A common approach is to exploit the portability and affordability of commercially available inertial measurement units (IMUs). However, these IMUs suffer from drift errors accumulated by double integration of acceleration readings. This process accrues intrinsic errors coming from the sensor’s sensitivity, factory bias, thermal noise, etc., which result in large deviations from the position’s ground truth over time. Other approaches utilize optical sensors for better position estimation, but these sensors suffer from occlusion and environment lighting conditions. In this thesis, we first present techniques to calibrate the IMU, minimizing the undesired effects of intrinsic imperfections residing within cheap MEMS sensors. We then introduce a Kalman filter-based fusion scheme incorporating data collected from the IMU and a Kinect camera, which is shown to overcome each sensor’s disadvantages and improve the overall quality of the reconstructed trajectory of human hands.
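The drift problem and the fusion idea described above can be illustrated with a textbook 1-D Kalman filter: position and velocity are predicted by double-integrating IMU acceleration, and corrected whenever an absolute position fix (e.g. from a Kinect) is available. This is a generic sketch, not the thesis's actual filter; the noise variances `q` and `r` are assumed values:

```python
import numpy as np

def fuse_imu_camera(accels, cam_pos, dt=0.01, q=1e-3, r=1e-2):
    """1-D constant-acceleration Kalman filter.

    Predicts position/velocity by integrating IMU acceleration
    (which alone drifts due to double integration) and corrects
    with absolute position fixes from a camera when available
    (entries of `cam_pos` may be None when no fix exists).
    """
    x = np.zeros(2)                       # state: [position, velocity]
    P = np.eye(2)                         # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
    B = np.array([0.5 * dt**2, dt])       # acceleration input model
    H = np.array([[1.0, 0.0]])            # we observe position only
    Q = q * np.eye(2)
    out = []
    for a, z in zip(accels, cam_pos):
        # predict: double-integrate acceleration
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        if z is not None:                 # camera fix available this step
            S = H @ P @ H.T + r           # innovation covariance
            K = (P @ H.T) / S             # Kalman gain
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

Running the filter with a small acceleration bias and no camera fixes shows the unbounded quadratic drift the abstract mentions; adding even infrequent camera fixes bounds the error.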
Adaptive Indoor Pedestrian Tracking Using Foot-Mounted Miniature Inertial Sensor
This dissertation introduces a positioning system for measuring and tracking the momentary location of a pedestrian, regardless of environmental variations. It proposes a 6-DOF (degrees of freedom) foot-mounted miniature inertial sensor for indoor localization, tested with both simulated and real-world data. To estimate the orientation, velocity and position of a pedestrian, we describe and implement a Kalman filter (KF) based framework, a zero-velocity updates (ZUPTs) methodology, and a zero-velocity (ZV) detection algorithm. The novel approach presented in this dissertation uses the interacting multiple model (IMM) filter to determine the state of a pedestrian with changing dynamics. The performance of the proposed method is evaluated in two ways. First, a vehicle traveling in a straight line is simulated using kinematic motion models commonly used in tracking (constant velocity (CV), constant acceleration (CA) and coordinated turn (CT) models), demonstrating that accurate state estimation of targets with changing dynamics is achieved through multiple model filtering. We conclude by proposing an IMM-estimator-based adaptive indoor pedestrian tracking system for handling dynamic motion, which can incorporate different motion types (walking, running, sprinting and ladder climbing), each with an individually determined threshold; the IMM adapts itself to correct for changes in motion model. Results indicate that the overall IMM performance is at all times similar to that of the best individual filter model within the IMM.
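The zero-velocity update (ZUPT) methodology mentioned above can be sketched in one dimension: whenever the foot is detected as stationary (stance phase), the integrated velocity is reset to zero, which bounds the drift from double integration. The detector and threshold below are deliberately simplistic assumptions for illustration; real ZV detectors use windowed statistics over accelerometer and gyroscope data:

```python
import numpy as np

def zupt_positions(accels, dt=0.01, zv_thresh=0.05):
    """Pedestrian dead reckoning with zero-velocity updates (1-D sketch).

    A stance phase is declared when |acceleration| falls below
    `zv_thresh`; the integrated velocity is then reset to zero,
    which prevents residual velocity errors from being integrated
    into position drift during stance.
    """
    v = 0.0
    p = 0.0
    positions = []
    for a in accels:
        v += a * dt                 # integrate acceleration to velocity
        if abs(a) < zv_thresh:      # zero-velocity (stance) detected
            v = 0.0                 # ZUPT: clamp velocity
        p += v * dt                 # integrate velocity to position
        positions.append(p)
    return np.array(positions)
```

With an asymmetric swing (so the integrated velocity has a residual error) followed by a stance phase, the ZUPT version holds position constant during stance, while the unclamped version keeps drifting.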
Proceedings of the 5th Baltic Mechatronics Symposium - Espoo April 17, 2020
The Baltic Mechatronics Symposium is an annual symposium with the objective of providing a forum for young scientists from Baltic countries to exchange knowledge, experience, results and information in a large variety of fields in mechatronics. The symposium was organized in co-operation with Taltech and Aalto University. Due to Coronavirus COVID-19, the symposium was organized as a virtual conference.
The contents of the proceedings:
1. Monitoring Cleanliness of Public Transportation with Computer Vision
2. Device for Bending and Cutting Coaxial Wires for Cryostat in Quantum Computing
3. Inertial Measurement Method and Application for Bowling Performance Metrics
4. Mechatronics Escape Room
5. Hardware-In-the-Loop Test Setup for Tuning Semi-Active Hydraulic Suspension Systems
6. Newtonian Telescope Design for Stand-off Laser Induced Breakdown Spectroscopy
7. Simulation and Testing of Temperature Behavior in Flat Type Linear Motor Carrier
8. Powder Removal Device for Metal Additive Manufacturing
9. Self-Leveling Spreader Beam for Adjusting the Orientation of an Overhead Crane Load
A calibration method for MEMS inertial sensors based on optical techniques.
Dong, Zhuxin. Thesis (M.Phil.), Chinese University of Hong Kong, 2008. Includes bibliographical references. Abstracts in English and Chinese.
Table of contents: Chapter 1, Introduction (architecture of UDWI; background of IMU sensor calibration). Chapter 2, 2D Motion Calibration (experimental platform and transparent table; matching algorithm, motion analysis, core algorithm and matching criterion; usage of a high-speed camera; functions realized). Chapter 3, Camera Calibration (related coordinate frames and pin-hole model; calibration for the nonlinear model; implementation: image capture, world-frame definition and corner extraction, main calibration; calibration results of the high-speed camera, lens selection and camera properties). Chapter 4, 3D Attitude Calibration (necessity of attitude calibration; stereo vision and 3D point reconstruction, with an example; idea of attitude calibration). Chapter 5, Experimental Results (calculation of the proportional parameter; accuracy test of stroke reconstruction; writing experiments of 26 letters, including letters b, n (with ZVC) and u, and multiple tests of the letter s; analysis of the resolution of the current vision algorithm and tests with various filters; calculation of static attitude). Chapter 6, Future Work (further multiple tests of the letter k; letter recognition based on neural-network classification). Chapter 7, Conclusion (calibration of MAG-μIMU sensors, accelerometers, and attitude; future work). Appendix A: experimental results of writing English letters.
From Unimodal to Multimodal: improving the sEMG-Based Pattern Recognition via deep generative models
Multimodal hand gesture recognition (HGR) systems can achieve higher
recognition accuracy than unimodal ones. However, acquiring multimodal gesture recognition data
typically requires users to wear additional sensors, thereby increasing
hardware costs. This paper proposes a novel generative approach to improve
Surface Electromyography (sEMG)-based HGR accuracy via virtual Inertial
Measurement Unit (IMU) signals. Specifically, we first trained a deep
generative model, exploiting the intrinsic correlation between forearm sEMG
and forearm IMU signals, to generate virtual forearm IMU signals from the
input sEMG signals. The sEMG signals and virtual IMU signals were then fed
into a multimodal Convolutional Neural Network (CNN) model for
gesture recognition. To evaluate the performance of the proposed approach, we
conducted experiments on 6 databases, including 5 publicly available databases
and our collected database comprising 28 subjects performing 38 gestures,
containing both sEMG and IMU data. The results show that our proposed approach
outperforms the sEMG-based unimodal HGR method, with accuracy increases of
2.15%-13.10%. This demonstrates that incorporating virtual IMU signals,
generated by deep generative models, can significantly enhance the accuracy of
sEMG-based HGR. The proposed approach represents a successful attempt to
transition from unimodal HGR to multimodal HGR without additional sensor
hardware.
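The pipeline above (generate virtual IMU signals from sEMG, then classify on the combined modalities) can be sketched with a linear least-squares "generator" standing in for the paper's deep generative model. All function names, feature dimensions, and the ridge parameter are illustrative assumptions:

```python
import numpy as np

def fit_virtual_imu(emg, imu, lam=1e-3):
    """Ridge-regression 'generator' mapping sEMG feature vectors to IMU
    feature vectors: a linear stand-in for a learned generative model
    that exploits the correlation between the two modalities.

    emg: (n_samples, d_emg), imu: (n_samples, d_imu).
    Returns W with imu ~= emg @ W.
    """
    d = emg.shape[1]
    return np.linalg.solve(emg.T @ emg + lam * np.eye(d), emg.T @ imu)

def multimodal_features(emg, W):
    """Concatenate real sEMG features with generated virtual IMU
    features, forming the input of a downstream multimodal gesture
    classifier (the CNN in the paper; omitted here)."""
    virtual_imu = emg @ W
    return np.hstack([emg, virtual_imu])
```

The point of the sketch is the data flow: at inference time only sEMG is measured, yet the classifier still receives a two-modality input, so no extra sensor hardware is needed.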
Low-Cost Indoor Localisation Based on Inertial Sensors, Wi-Fi and Sound
The average life expectancy has been increasing in the last decades, creating the need for
new technologies to improve the quality of life of the elderly. In the Ambient Assisted
Living scope, indoor location systems emerged as a promising technology capable of supporting the elderly, providing them a safer environment to live in, and promoting their
autonomy. Current indoor location technologies are divided into two categories, depending on their need for additional infrastructure. Infrastructure-based solutions require
expensive deployment and maintenance. On the other hand, most infrastructure-free
systems rely on a single source of information, being highly dependent on its availability.
Such systems will hardly be deployed in real-life scenarios, as they cannot handle the
absence of their source of information. An efficient solution must, thus, guarantee the
continuous indoor positioning of the elderly.
This work proposes a new room-level low-cost indoor location algorithm. It relies
on three information sources: inertial sensors, to reconstruct users’ trajectories; environmental sound, to exploit the unique characteristics of each home division; and Wi-Fi,
to estimate the distance to the Access Point in the neighbourhood. Two data collection
protocols were designed to resemble a real living scenario, and a data processing stage
was applied to the collected data. Then, each source was used to train individual Machine Learning (including Deep Learning) algorithms to identify room-level positions.
As each source provides different information to the classification, the data were merged
to produce a more robust localization. Three data fusion approaches (input-level, early,
and late fusion) were implemented for this goal, providing a final output containing
complementary contributions from all data sources.
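Of the three fusion strategies mentioned, late fusion is the simplest to sketch: each source's classifier outputs per-room probability vectors, and these are combined (here by a weighted average; the weights are an illustrative assumption) before picking the room label:

```python
import numpy as np

def late_fusion(prob_lists, weights=None):
    """Late fusion of per-source classifier outputs.

    prob_lists: list of (n_samples, n_classes) probability arrays,
    one per source (e.g. inertial, sound, Wi-Fi). The fused prediction
    is the argmax of the (optionally weighted) average probabilities,
    so each source contributes complementary evidence per room.
    """
    probs = np.stack(prob_lists)                  # (n_sources, n, c)
    if weights is None:
        weights = np.ones(len(prob_lists)) / len(prob_lists)
    fused = np.tensordot(weights, probs, axes=1)  # (n, c)
    return fused.argmax(axis=1)
```

A confident correct source can outvote a weakly wrong one, which is one reason multi-source fusion degrades gracefully when a single source is unreliable.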
Experimental results show that the performance improved when more than one source
was used, attaining a weighted F1-score of 81.8% in the localization between seven home
divisions. In conclusion, the evaluation of the developed algorithm shows that it can
achieve accurate room-level indoor localization, being, thus, suitable to be applied in
Ambient Assisted Living scenarios.