A dead reckoning localization method for in-pipe detector of water supply pipeline: an application to leak localization
The integrity of an urban water supply pipeline system is important to urban life. The aim of the study reported in this paper is to locate water pipeline leaks using an in-pipe detector. A mathematical model is extracted from an actual inspection system. Using homogeneous transformation theory, the transformation matrix from the carrier to a reference coordinate system is deduced, and the global transformation matrix describing the detector's posture is then obtained. By measuring the distance increment over each sample time step in the carrier coordinate system, the cumulative distance is calculated. After combining the data of the inertial measurement unit (IMU) and odometer, the leak can be located. To improve the accuracy of leak localization, magnetic markers are installed at roughly 1 km intervals; these provide reference points used to compensate for accumulated error during the localization process. On this basis, a dead reckoning localization method combining data from a micro-electro-mechanical IMU, three odometers, and the magnetic markers is proposed. To verify the localization algorithm, a simulation case study is conducted with artificial error generated by white noise. The simulation results show that the dead reckoning algorithm can effectively provide leak locations with reasonable uncertainty. An experimental platform was then built, and the experimental results show that the relative error of leak localization achieves reasonably good performance.
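The core dead-reckoning update described above, rotating each odometer distance increment from the carrier frame into the reference frame and accumulating, can be sketched in a 2-D simplification (function and parameter names here are illustrative, not from the paper):

```python
import math

def dead_reckon(increments, headings, markers=None):
    """Accumulate odometer distance increments, each rotated from the
    carrier frame into the reference frame by the IMU heading (2-D case).
    `markers` optionally maps a sample index to a known (x, y) reference
    position, emulating the magnetic-marker compensation of drift."""
    x, y, path = 0.0, 0.0, []
    for i, (d, theta) in enumerate(zip(increments, headings)):
        x += d * math.cos(theta)   # increment projected onto reference X
        y += d * math.sin(theta)   # increment projected onto reference Y
        if markers and i in markers:
            x, y = markers[i]      # reset accumulated drift at a marker
        path.append((x, y))
    return path

# Straight run along X: 10 steps of 0.5 m at heading 0
path = dead_reckon([0.5] * 10, [0.0] * 10)
print(path[-1])  # (5.0, 0.0)
```

A real implementation would use the full 3-D homogeneous transformation chain and fuse three odometers, but the accumulate-and-reset structure is the same.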
SpiroMask: Measuring Lung Function Using Consumer-Grade Masks
According to the World Health Organisation (WHO), 235 million people suffer
from respiratory illnesses and four million people die annually due to air
pollution. Regular lung health monitoring can lead to prognoses about
deteriorating lung health conditions. This paper presents our system SpiroMask
that retrofits a microphone in consumer-grade masks (N95 and cloth masks) for
continuous lung health monitoring. We evaluate our approach on 48 participants
(including 14 with lung health issues) and find that we can estimate parameters
such as lung volume and respiration rate within the approved error range by the
American Thoracic Society (ATS). Further, we show that our approach is robust
to sensor placement inside the mask. Accepted in the ACM Transactions on Computing for Healthcare (HEALTH).
A methodology for the performance evaluation of inertial measurement units
This paper presents a methodology for a reliable comparison among Inertial Measurement Units or attitude estimation devices in a Vicon environment. The misalignment among the reference systems and the lack of synchronization among the devices are the main obstacles to a correct performance evaluation using Vicon as the reference measurement system. We propose a genetic algorithm coupled with Dynamic Time Warping (DTW) to solve these issues. To validate the efficacy of the methodology, a performance comparison is implemented between the WB-3 ultra-miniaturized Inertial Measurement Unit (IMU), developed by our group, and the commercial IMU InertiaCube3™ by InterSense.
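Dynamic Time Warping, used above to overcome the lack of synchronization between the devices and the Vicon reference, can be illustrated in its basic dynamic-programming form (a minimal sketch, not the authors' implementation):

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW: minimal cumulative cost of aligning
    sequence a to sequence b, allowing local stretching of the time axis."""
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = best cost of aligning a[:i] with b[:j]
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# A time-shifted copy of a signal aligns with zero cost
print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 3]))  # 0.0
```

This tolerance to timing offsets is what makes DTW suitable as a fitness component for the genetic alignment search described above.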
Ultra-Low-Power IoT Solutions for Sound Source Localization: Combining Mixed-Signal Processing and Machine Learning
With the prevalence of smartphones, pedestrians and joggers today often walk or run while listening to music. Deprived of auditory stimuli that could provide important cues to danger, they are at a much greater risk of being hit by cars or other vehicles. This research begins by building a wearable system that uses multichannel audio sensors embedded in a headset to detect and locate cars from their honks and their engine and tire noises. Based on this detection, the system can warn pedestrians of the imminent danger of approaching cars. We demonstrate that a segmented architecture and implementation, consisting of headset-mounted audio sensors, front-end hardware that performs signal processing and feature extraction, and machine-learning-based classification on a smartphone, provides early danger detection in real time, from up to 80m distance, with greater than 80% precision and 90% recall, and alerts the user on time (about 6s in advance for a car traveling at 30mph).
The time delay between audio signals in a microphone array is the most important feature for sound-source localization. This work also presents a polarity-coincidence, adaptive time-delay estimation (PCC-ATDE) mixed-signal technique that uses 1-bit quantized signals and a negative-feedback architecture to directly determine the time delay between signals in the analog inputs and convert it to a digital number. This direct conversion, without a multibit ADC and further digital-signal processing, allows for ultra-low power consumption. A prototype chip in 0.18μm CMOS with 4 analog inputs consumes 78nW with a 3-channel 8-bit digital time-delay output while sampling at 50kHz with a 20μs resolution and 6.06 ENOB. We present a theoretical analysis for the nonlinear, signal-dependent feedback loop of the PCC-ATDE. A delay-domain model of the system is developed to estimate the power bandwidth of the converter and predict its dynamic response. Results are validated with experiments using real-life stimuli, captured with a microphone array, that demonstrate the technique’s ability to localize a sound source. The chip is further integrated in an embedded platform and deployed as an audio-based vehicle-bearing IoT system.
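The polarity-coincidence principle behind the PCC-ATDE, correlating only the signs (1-bit quantization) of two channels and selecting the lag with maximal coincidence, can be sketched as a software analogue (names and the exhaustive lag search are illustrative; the chip instead converges via its negative-feedback loop):

```python
def sign(x):
    """1-bit quantization: keep only the polarity (sign(0) taken as +1)."""
    return 1 if x >= 0 else -1

def pcc_delay(ch_a, ch_b, max_lag):
    """Estimate the delay of ch_b relative to ch_a by maximizing the
    polarity-coincidence correlation: the mean product of the sign-only
    versions of the two channels at each candidate lag."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(ch_a[i], ch_b[i + lag])
                 for i in range(len(ch_a))
                 if 0 <= i + lag < len(ch_b)]
        score = sum(sign(x) * sign(y) for x, y in pairs) / len(pairs)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# ch_b is ch_a delayed by 3 samples
a = [0.0, 1.0, -0.5, 2.0, -1.0, 0.5, -2.0, 1.5, -0.2, 0.8]
b = [0.0] * 3 + a[:-3]
print(pcc_delay(a, b, max_lag=4))  # 3
```

Because only polarities are multiplied, the correlation reduces to coincidence counting, which is what makes a sub-100nW hardware realization feasible.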
Finally, we investigate the signal’s envelope, an important feature for a host of applications enabled by machine-learning algorithms. Conventionally, the raw analog signal is digitized first, followed by feature extraction in the digital domain. This work presents an ultra-low-power envelope-to-digital converter (EDC) consisting of a passive switched-capacitor envelope detector and an inseparable successive-approximation-register analog-to-digital converter (ADC). The two blocks integrate directly at different sampling rates without a buffer between them, thanks to the ping-pong operation of their sampling capacitors. An EDC prototype was fabricated in 180nm CMOS. It provides 7.1 effective bits of ADC resolution and supports an input signal bandwidth up to 5kHz and an envelope bandwidth up to 50Hz while consuming 9.6nW.
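A software analogue of the envelope-to-digital chain, full-wave rectification, low-pass envelope tracking, and quantization, can be sketched as follows (parameters are illustrative; the actual chip performs these steps passively in the analog domain):

```python
import math

def envelope_to_digital(samples, alpha=0.05, bits=7, full_scale=1.0):
    """Software analogue of an envelope-to-digital converter: full-wave
    rectify, track the envelope with a one-pole low-pass (smoothing
    factor alpha), then quantize to `bits` of resolution."""
    levels = (1 << bits) - 1
    env, codes = 0.0, []
    for s in samples:
        rectified = abs(s)                # full-wave rectification
        env += alpha * (rectified - env)  # one-pole low-pass filter
        codes.append(round(min(env, full_scale) / full_scale * levels))
    return codes

# 1 kHz unit-amplitude tone sampled at 50 kHz; the tracked envelope
# settles near the tone's average rectified amplitude (2/pi)
tone = [math.sin(2 * math.pi * 1000 * n / 50000) for n in range(5000)]
codes = envelope_to_digital(tone)
print(codes[-1])
```

Extracting the envelope before digitization, as the EDC does in hardware, means the ADC only needs to run at the slow envelope bandwidth rather than the full signal bandwidth.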
Exploring Mechanocardiography as a Tool to Monitor Systolic Function Improvement with Resynchronization Pacing
This thesis explores the use of mechanocardiography (MCG) as a novel approach to assess and quantify improvements in systolic cardiac function resulting from cardiac resynchronization therapy (CRT). The study focuses on patients with heart failure and reduced ejection fraction (HFrEF), a population commonly treated with CRT. The primary objective is to investigate differences in MCG waveforms during CRT and single-chamber atrial (AAI) pacing, specifically comparing waveform characteristics. Ten patients with heart failure and previously implanted CRT pacemakers were included in the study. The MCG and ECG signals were recorded using accelerometers, gyroscopes, and a Holter measurement unit placed on the lower chest. ECG and MCG recordings were obtained during both CRT and AAI pacing at a consistent heart rate of 80 beats per minute. The analysis considered six MCG axes and three MCG vectors across various frequency ranges to derive key waveform characteristics such as energy, vertical range, electromechanical systole (QS2), and left ventricular ejection time (LVET). The results revealed significant differences between CRT and AAI pacing, with CRT consistently exhibiting higher energy and vertical range during systole across multiple axes. Notably, the study identified the clearest differences in the SCG-Y, GCG-X, and GCG-Y axes within the 6–90 Hz frequency range. However, no difference in QS2, LVET, or the waveform characteristics around aortic valve closure was identified between the pacing modes.
The findings suggest that MCG waveforms can serve as indicators of improved mechanical cardiac function during CRT. The use of accelerometers and gyroscopes may contribute to the development of a non-invasive and potentially predictive tool for optimizing CRT settings. The promising results underscore the need for further research exploring the differences in signal characteristics between responders and non-responders to CRT. The overall aim is to enhance the clinical application of MCG, leveraging wearable technology and micro-electromechanical systems (MEMS), and ultimately to improve the optimization and efficacy of CRT in heart failure (HF) management.
A Multi-Sensor Fusion-Based Underwater Slam System
This dissertation addresses the problem of real-time Simultaneous Localization and Mapping (SLAM) in challenging environments. SLAM is one of the key enabling technologies for autonomous robots navigating unknown environments by processing information on their on-board computational units. In particular, we study the exploration of challenging GPS-denied underwater environments to enable a wide range of robotic applications, including historical studies, health monitoring of coral reefs, and inspection of underwater infrastructure such as bridges, hydroelectric dams, water supply systems, and oil rigs. Mapping underwater structures is important in several fields, such as marine archaeology, Search and Rescue (SaR), resource management, hydrogeology, and speleology. However, due to the highly unstructured nature of such environments, navigation by human divers can be extremely dangerous, tedious, and labor intensive. Hence, an underwater robot is an excellent fit to build a map of the environment while simultaneously localizing itself within it.
The main contribution of this dissertation is the design and development of a real-time robust SLAM algorithm for small- and large-scale underwater environments. SVIn, a novel tightly-coupled keyframe-based non-linear optimization framework fusing sonar, visual, inertial, and water-depth information with robust initialization, loop-closing, and relocalization capabilities, is presented. Introducing acoustic range information to aid the visual data shows improved reconstruction and localization. The availability of depth information from water pressure enables robust initialization, refines the scale factor, and helps reduce drift in the tightly-coupled integration. The complementary characteristics of these sensing modalities provide accurate and robust localization in unstructured environments with low visibility and few visual features, making them an ideal choice for underwater navigation. The proposed system has been successfully tested and validated on both benchmark datasets and numerous real-world scenarios. It has also been used for planning for an underwater robot in the presence of obstacles. Experimental results on datasets collected with a custom-made underwater sensor suite and the autonomous underwater vehicle (AUV) Aqua2 in challenging underwater environments with poor visibility demonstrate accuracy and robustness never achieved before. To aid the sparse reconstruction, a contour-based reconstruction approach has been developed that utilizes the well-defined edges between the well-lit area and darkness. In particular, low lighting conditions, or even the complete absence of natural light inside caves, result in strong lighting variations, e.g., the cone of the artificial video light intersecting underwater structures and the shadow contours.
The proposed method utilizes these contours to provide additional features, resulting in a denser 3D point cloud than the usual point clouds from a visual odometry system. Experimental results in an underwater cave demonstrate the performance of our system. This enables more robust navigation of autonomous underwater vehicles, which can use the denser 3D point cloud to detect obstacles and achieve higher-resolution reconstructions.
Development of an augmented reality guided computer assisted orthopaedic surgery system
Previously held under moratorium from 1st December 2016 until 1st December 2021.
This body of work documents the development of a proof-of-concept augmented reality
guided computer assisted orthopaedic surgery system – ARgCAOS.
After initial investigation a visible-spectrum single camera tool-mounted tracking
system based upon fiducial planar markers was implemented. The use of
visible-spectrum cameras, as opposed to the infra-red cameras typically used by
surgical tracking systems, allowed the captured image to be streamed to a display in
an intelligible fashion. The tracking information defined the location of physical
objects relative to the camera. Therefore, this information allowed virtual models to
be overlaid onto the camera image. This produced a convincing augmented
experience, whereby the virtual objects appeared to be within the physical world,
moving with both the camera and markers as expected of physical objects.
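The overlay step described above, placing a virtual model into the camera image using the tracked marker pose, can be sketched as a homogeneous transform followed by a pinhole projection (a minimal illustration with assumed camera intrinsics, not the system's implementation):

```python
def project_overlay(vertex, marker_to_camera, f=800.0, cx=320.0, cy=240.0):
    """Transform a virtual-model vertex (defined in the marker frame) into
    the camera frame via a 4x4 homogeneous marker-to-camera matrix, then
    pinhole-project it onto the image plane for overlay drawing."""
    x, y, z = vertex
    # Homogeneous transform: camera-frame point = T * [x, y, z, 1]
    p = [sum(marker_to_camera[r][c] * v
             for c, v in enumerate((x, y, z, 1.0)))
         for r in range(3)]
    # Pinhole projection with focal length f and principal point (cx, cy)
    return (cx + f * p[0] / p[2], cy + f * p[1] / p[2])

# Marker 0.5 m straight ahead of the camera, no rotation
T = [[1, 0, 0, 0.0],
     [0, 1, 0, 0.0],
     [0, 0, 1, 0.5],
     [0, 0, 0, 1.0]]
print(project_overlay((0.0, 0.0, 0.0), T))  # (320.0, 240.0)
```

Because the fiducial tracking continuously updates the marker-to-camera matrix, re-projecting the model vertices every frame keeps the virtual objects locked to the physical scene as camera and markers move.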
Analysis of the first generation system identified both accuracy and graphical
inadequacies, prompting the development of a second generation system. This too
was based upon a tool-mounted fiducial marker system, and improved performance
to near-millimetre probing accuracy. A resection system was incorporated into the
system, and utilising the tracking information controlled resection was performed,
producing sub-millimetre accuracies.
Several complications resulted from the tool-mounted approach. Therefore, a third
generation system was developed. This final generation deployed a stereoscopic
visible-spectrum camera system affixed to a head-mounted display worn by the user.
The system allowed the augmentation of the natural view of the user, providing
convincing and immersive three dimensional augmented guidance, with probing and
resection accuracies of 0.55±0.04 and 0.34±0.04 mm, respectively.