
    Multidimensional embedded MEMS motion detectors for wearable mechanocardiography and 4D medical imaging

    Background: Cardiovascular diseases are the number one cause of death. Of these deaths, almost 80% are due to coronary artery disease (CAD) and cerebrovascular disease. Multidimensional microelectromechanical systems (MEMS) sensors allow measuring the mechanical movement of the heart muscle, offering an entirely new and innovative solution for evaluating cardiac rhythm and function. Recent advances in miniaturized motion sensors present an exciting opportunity to study novel device-driven and functional motion detection systems in the areas of both cardiac monitoring and biomedical imaging, for example, in computed tomography (CT) and positron emission tomography (PET). Methods: This Ph.D. work describes a new cardiac motion detection paradigm and measurement technology based on multimodal measuring tools, tracking the heart's kinetic activity with micro-sized MEMS sensors, and on novel computational approaches, deploying signal processing and machine learning techniques to detect cardiac pathological disorders. In particular, this study focuses on the capability of joint gyrocardiography (GCG) and seismocardiography (SCG), techniques that together constitute the mechanocardiography (MCG) concept, representing the mechanical characteristics of the cardiac precordial surface vibrations. Results: Experimental analyses showed that integrating multisource sensory data resulted in precise estimation of heart rate with an accuracy of 99% (healthy, n=29), detection of heart arrhythmia (n=435) with an accuracy of 95-97%, and indication of ischemic disease with approximately 75% accuracy (n=22), and significantly improved the quality of four-dimensional (4D) cardiac PET images by eliminating motion-related inaccuracies with a MEMS dual-gating approach. Tissue Doppler imaging (TDI) analysis of GCG (healthy, n=9) showed promising results for measuring cardiac timing intervals and myocardial deformation changes.
Conclusion: The findings of this study demonstrate the clinical potential of MEMS motion sensors in cardiology, which may facilitate timely diagnosis of cardiac abnormalities. Multidimensional MCG can effectively contribute to detecting atrial fibrillation (AFib), myocardial infarction (MI), and CAD. Additionally, MEMS motion sensing improves the reliability and quality of cardiac PET imaging.
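The heart-rate estimation step described above can be caricatured as peak detection over the precordial vibration signal. This is a minimal sketch, not the dissertation's actual pipeline; the threshold, refractory period, and synthetic input are all assumptions.

```python
def estimate_heart_rate(signal, fs, min_rr_s=0.4):
    """Estimate heart rate (bpm) by detecting dominant peaks in a
    cardiac vibration (SCG/GCG-like) trace: amplitude threshold,
    local-maximum test, and a refractory period between beats."""
    env = [abs(s) for s in signal]
    thr = 0.5 * max(env)                      # crude amplitude threshold
    min_gap = int(min_rr_s * fs)              # refractory period, in samples
    peaks, last = [], -min_gap
    for i in range(1, len(env) - 1):
        if (env[i] >= thr and env[i] >= env[i - 1]
                and env[i] > env[i + 1] and i - last >= min_gap):
            peaks.append(i)
            last = i
    if len(peaks) < 2:
        return None
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]  # beat intervals (s)
    return 60.0 / (sum(rr) / len(rr))         # mean heart rate, bpm

# Synthetic trace sampled at 100 Hz: one sharp spike per second -> 60 bpm.
fs = 100
sig = [1.0 if i % fs == 0 else 0.0 for i in range(10 * fs)]
print(round(estimate_heart_rate(sig, fs)))    # -> 60
```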

    Applying multimodal sensing to human location estimation

    Mobile devices like smartphones and smartwatches are beginning to "stick" to the human body. Given that these devices are equipped with a variety of sensors, they are becoming a natural platform to understand various aspects of human behavior. This dissertation will focus on just one dimension of human behavior, namely "location". We will begin by discussing our research on localizing humans in indoor environments, a problem that requires precise tracking of human footsteps. We investigated the benefits of incorporating smartphone sensors (accelerometers, gyroscopes, magnetometers, etc.) into the indoor localization framework, which breaks away from pure radio frequency based localization (e.g., cellular, WiFi). Our research leveraged inherent properties of indoor environments to perform localization. We also designed additional solutions, where computer vision was integrated with sensor fusion to offer highly precise localization. We will close this thesis with micro-scale tracking of the human wrist and demonstrate how motion data processing is indeed a "double-edged sword", offering unprecedented utility on one hand while breaching privacy on the other.
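One building block of sensor-assisted footstep tracking is pedestrian dead reckoning: accumulating a 2-D track from detected steps and per-step headings. The sketch below is illustrative only, assuming a fixed stride length and headings already estimated from the gyroscope/magnetometer; it is not the thesis's actual framework.

```python
import math

def dead_reckon(step_headings_deg, stride_m=0.7):
    """Pedestrian dead reckoning: accumulate a 2-D track from detected
    steps, given a per-step heading and a fixed stride length."""
    x = y = 0.0
    track = [(x, y)]
    for h in step_headings_deg:
        rad = math.radians(h)
        x += stride_m * math.sin(rad)   # east component
        y += stride_m * math.cos(rad)   # north component
        track.append((round(x, 3), round(y, 3)))
    return track

# Four steps heading north (0 deg), then four heading east (90 deg).
track = dead_reckon([0, 0, 0, 0, 90, 90, 90, 90])
print(track[-1])   # -> (2.8, 2.8)
```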

    Enhancing Usability, Security, and Performance in Mobile Computing

    We have witnessed the prevalence of smart devices in every aspect of human life. However, the ever-growing smart devices present significant challenges in terms of usability, security, and performance. First, we need to design new interfaces to improve device usability, which has been neglected during the rapid shift from hand-held mobile devices to wearables. Second, we need to protect smart devices with abundant private data against unauthorized users. Last, new applications with compute-intensive tasks demand the integration of emerging mobile backend infrastructure. This dissertation focuses on addressing these challenges. First, we present GlassGesture, a system that improves the usability of Google Glass through a head gesture user interface with gesture recognition and authentication. We accelerate the recognition by employing a novel similarity search scheme, and improve the authentication performance by applying new features of head movements in an ensemble learning method. As a result, GlassGesture achieves 96% gesture recognition accuracy. Furthermore, GlassGesture accepts authorized users in nearly 92% of trials, and rejects attackers in nearly 99% of trials. Next, we investigate the authentication between a smartphone and a paired smartwatch. We design and implement WearLock, a system that utilizes one's smartwatch to unlock one's smartphone via acoustic tones. We build an acoustic modem with sub-channel selection and adaptive modulation, which generates modulated acoustic signals to maximize the unlocking success rate against ambient noise. We leverage the motion similarities of the devices to eliminate unnecessary unlocking. We also offload heavy computation tasks from the smartwatch to the smartphone to shorten response time and save energy. The acoustic modem achieves a low bit error rate (BER) of 8%.
Compared to traditional manual personal identification number (PIN) entry, WearLock not only automates the unlocking but also speeds it up by at least 18%. Last, we consider low-latency video analytics on mobile devices, leveraging emerging mobile backend infrastructure. We design and implement LAVEA, a system which offloads computation from mobile clients to edge nodes, to accomplish tasks with intensive computation at places closer to users in a timely manner. We formulate an optimization problem for offloading task selection and prioritize offloading requests received at the edge node to minimize the response time. We design and compare various task placement schemes for inter-edge collaboration to further improve the overall response time. Our results show that the client-edge configuration has a speedup ranging from 1.3x to 4x against running solely on the client and 1.2x to 1.7x against the client-cloud configuration.
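The offloading decision at the heart of an edge system like LAVEA can be caricatured as picking the execution site with the lowest estimated response time. The cost model below (transfer time plus compute time) and all numbers are illustrative assumptions, not LAVEA's actual optimization formulation.

```python
def best_site(task_cycles, input_bytes, local_hz, sites):
    """Pick the execution site with the lowest estimated response time.
    `sites` maps name -> (uplink_bytes_per_s, cpu_hz). A toy cost model:
    response time = transfer time + compute time."""
    best_name, best_t = "local", task_cycles / local_hz
    for name, (bw, hz) in sites.items():
        t = input_bytes / bw + task_cycles / hz   # transfer + compute
        if t < best_t:
            best_name, best_t = name, t
    return best_name

# A compute-heavy task: the nearby edge node wins despite transfer cost.
sites = {"edge": (10e6, 8e9), "cloud": (1e6, 20e9)}
print(best_site(task_cycles=4e9, input_bytes=2e6, local_hz=1e9, sites=sites))  # -> edge
```

With a tiny task (few cycles, large input), the same model keeps execution local, which is the intended trade-off.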

    Discovering user mobility and activity in smart lighting environments

    "Smart lighting" environments seek to improve energy efficiency, human productivity and health by combining sensors, controls, and Internet-enabled lights with emerging “Internet-of-Things” technology. Interesting and potentially impactful applications involve adaptive lighting that responds to individual occupants' location, mobility and activity. In this dissertation, we focus on the recognition of user mobility and activity using sensing modalities and analytical techniques. This dissertation encompasses prior work using body-worn inertial sensors in one study, followed by smart-lighting-inspired infrastructure sensors deployed with lights. The first approach employs wearable inertial sensors and body area networks that monitor human activities via a user's smart devices. Real-time algorithms are developed to (1) estimate angles of excess forward lean to prevent the risk of falls, (2) identify functional activities, including postures, locomotion, and transitions, and (3) capture gait parameters. Two human activity datasets are collected from 10 healthy young adults and 297 elder subjects, respectively, for laboratory validation and real-world evaluation. Results show that these algorithms can identify all functional activities accurately with a sensitivity of 98.96% on the 10-subject dataset, and can detect walking activities and gait parameters consistently with high test-retest reliability (p-value < 0.001) on the 297-subject dataset. The second approach leverages pervasive "smart lighting" infrastructure to track human location and predict activities. A use-case-oriented design methodology is considered to guide the design of sensor operation parameters for localization performance metrics from a system perspective. Integrating a network of low-resolution time-of-flight sensors in ceiling fixtures, a recursive 3D location estimation formulation is established that links a physical indoor space to an analytical simulation framework.
Based on indoor location information, a label-free clustering-based method is developed to learn user behaviors and activity patterns. Location datasets are collected while users perform unconstrained and uninstructed activities in the smart lighting testbed under different layout configurations. Results show that the activity recognition performance, measured in terms of correct classification rate (CCR), ranges from approximately 90% to 100% across a wide range of spatio-temporal resolutions on these location datasets, and is insensitive to reconfiguration of the environment layout and to the presence of multiple users.
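The label-free, clustering-based analysis of location data described above can be sketched with plain k-means over 2-D position samples, grouping dwell regions into candidate activity zones. The initialization scheme and the toy data are assumptions, not the dissertation's method.

```python
def kmeans(points, k=2, iters=10):
    """Plain k-means over 2-D location samples: a label-free way to
    group indoor dwell positions into activity zones."""
    # Deterministic toy init: evenly spaced samples as seed centers.
    centers = points[::max(1, len(points) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2
                                            + (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        centers = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                   if g else centers[j] for j, g in enumerate(groups)]
    return sorted((round(x, 2), round(y, 2)) for x, y in centers)

# Two dwell regions: samples near a desk (0, 0) and a couch (5, 5).
pts = [(0, 0), (0.2, 0.1), (0.1, 0.3), (5, 5), (5.1, 4.9), (4.8, 5.2)]
print(kmeans(pts, 2))   # -> [(0.1, 0.13), (4.97, 5.03)]
```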

    Inferences from Interactions with Smart Devices: Security Leaks and Defenses

    We unlock our smart devices such as smartphones several times every day using a PIN, password, or graphical pattern if the device is secured by one. The scope and usage of smart devices are expanding day by day in our everyday life, and hence so is the need to make them more secure. In the near future, we may need to authenticate ourselves on emerging smart devices such as electronic doors, exercise equipment, power tools, medical devices, and smart TV remote controls. While recent research focuses on developing new behavior-based methods to authenticate users on these smart devices, PINs and passwords still remain the primary methods of user authentication. Although recent research exposes observation-based vulnerabilities, the popular belief is that direct observation attacks can be thwarted by simple methods that obscure the attacker's view of the input console (or screen). In this dissertation, we study users' hand movement patterns while they type on their smart devices. The study concentrates on the following two factors: (1) finding security leaks in the observed hand movement patterns (we showcase that the user's hand movement on its own reveals the user's sensitive information) and (2) developing methods to build a lightweight, easy-to-use, and more secure authentication system. The users' hand movement patterns were captured through a video camcorder and built-in motion sensors such as the gyroscope and accelerometer in the user's device.
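Inference from hand-movement data typically begins by converting raw motion-sensor streams into per-window statistical features. The sketch below illustrates that generic step only; the window size, the feature choice, and the sample trace are assumptions, not the study's actual pipeline.

```python
import statistics

def window_features(samples, win=5):
    """Split a 1-D motion-sensor stream into fixed-size windows and
    compute simple statistics -- the kind of features typing-inference
    and behavioral-authentication models are often trained on."""
    feats = []
    for i in range(0, len(samples) - win + 1, win):
        w = samples[i:i + win]
        feats.append((round(statistics.mean(w), 2),
                      round(statistics.pstdev(w), 2)))
    return feats

# A resting window followed by a tap burst on the z-axis accelerometer.
accel_z = [0.0, 0.1, 0.0, -0.1, 0.0, 1.0, 1.2, 0.8, 1.1, 0.9]
print(window_features(accel_z))   # -> [(0.0, 0.06), (1.0, 0.14)]
```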

    Analyzing the Impact of Spatio-Temporal Sensor Resolution on Player Experience in Augmented Reality Games

    Along with automating everyday tasks of human life, smartphones have become one of the most popular devices to play video games on due to their interactivity. Smartphones are embedded with various integrated sensors, such as motion sensors or location sensors, which enable new interaction techniques that enhance usability. However, despite their mobility and embedded sensor capacity, smartphones are limited in processing power and display area compared to desktop computers and consoles. When it comes to evaluating Player Experience (PX), players might not have as compelling an experience because the rich graphics environments that a desktop computer can provide are absent on a smartphone. A plausible alternative in this regard is substituting the virtual game world with a real-world game board, perceived through the device camera by rendering digital artifacts over the camera view. This technology is widely known as Augmented Reality (AR). Smartphone sensors (e.g., GPS, accelerometer, gyroscope, compass) have enhanced the capability for deploying Augmented Reality technology. AR has been applied to a large number of smartphone games, including shooters, casual games, and puzzles. Because AR play environments are viewed through the camera, rendering the digital artifacts consistently and accurately is crucial: the digital characters need to move with respect to the sensed orientation, so the accelerometer and gyroscope need to provide sufficiently accurate and precise readings to make the game playable. In particular, determining the pose of the camera in space is vital, as the appropriate angle from which to view the rendered digital characters is determined by the pose of the camera. This defines how well the players will be able to interact with the digital game characters.
Depending on the Quality of Service (QoS) of these sensors, the Player Experience (PX) may vary, as the rendering of digital characters is affected by noisy sensors causing a loss of registration. Confronting such a problem while developing AR games is difficult in general, as it requires creating a wide variety of game types, narratives, and input modalities, as well as user testing. Moreover, current AR game developers do not have any specific guidelines for developing AR games, and concrete guidelines outlining the tradeoffs between QoS and PX for different genres and interaction techniques are required. My dissertation provides a complete view (a taxonomy) of the spatio-temporal sensor resolution dependency of existing AR games. Four user experiments have been conducted and one experiment is proposed to validate the taxonomy and demonstrate the differential impact of sensor noise on gameplay across different genres of AR games and different aspects of PX. This analysis is performed in the context of a novel instrumentation technology, which allows the controlled manipulation of QoS on position and orientation sensors. The experimental outcomes demonstrate how the QoS of input sensor noise impacts PX differently while playing AR games of different genres; the key elements creating this differential impact are the input modality, narrative, and game mechanics. Finally, concrete guidelines are derived for regulating sensor QoS as a complete set of instructions for developing different genres of AR games.
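The controlled manipulation of sensor QoS described above amounts to injecting calibrated noise into otherwise clean sensor streams. A minimal sketch, assuming zero-mean Gaussian noise on orientation readings; the actual noise model of the instrumentation technology is not specified here.

```python
import random

def degrade_orientation(samples_deg, noise_std_deg, seed=42):
    """Emulate a lower sensor QoS level by adding zero-mean Gaussian
    noise to clean orientation readings (the noise model is an
    assumption for illustration)."""
    rng = random.Random(seed)     # fixed seed for a repeatable trial
    return [s + rng.gauss(0.0, noise_std_deg) for s in samples_deg]

clean = [0.0, 10.0, 20.0, 30.0]
noisy = degrade_orientation(clean, noise_std_deg=2.0)
# Mean absolute registration error introduced by the degraded QoS:
err = sum(abs(n - c) for n, c in zip(noisy, clean)) / len(clean)
print(err > 0.0)   # -> True
```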

    Multimodal Sensing for Robust and Energy-Efficient Context Detection with Smart Mobile Devices

    Adoption of smart mobile devices (smartphones, wearables, etc.) is rapidly growing. There are already over 2 billion smartphone users worldwide [1] and the percentage of smartphone users is expected to be over 50% in the next five years [2]. These devices feature rich sensing capabilities which allow inferences about the mobile device user's surroundings and behavior. Multiple and diverse sensors common on such mobile devices facilitate observing the environment from different perspectives, which helps to increase the robustness of inferences and enables more complex context detection tasks. Though a larger number of sensing modalities can be beneficial for more accurate and wider mobile context detection, integrating these sensor streams is non-trivial. This thesis presents how multimodal sensor data can be integrated to facilitate robust and energy-efficient mobile context detection, considering three important and challenging detection tasks: indoor localization, indoor-outdoor detection and human activity recognition. It presents three methods for multimodal sensor integration, each applied to a different type of context detection task considered in this thesis. These gradually decrease in design complexity, starting with a solution based on an engineering approach that decomposes context detection into simpler tasks and integrates these with a particle filter for indoor localization. This is followed by manual extraction of features from different sensors and use of an adaptive machine learning technique called semi-supervised learning for indoor-outdoor detection. Finally, a method using deep neural networks, capable of extracting non-intuitive features directly from raw sensor data, is used for human activity recognition; this method also provides a higher degree of generalization to other context detection tasks. Energy efficiency is an important consideration for battery-powered mobile devices in general, and context detection is no exception.
In the various context detection tasks and solutions presented in this thesis, particular attention is paid to this issue by relying largely on sensors that consume little energy and on lightweight computations. Overall, the solutions presented improve on the state of the art in terms of accuracy and robustness while keeping energy consumption low, making them practical for use on mobile devices.
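The particle-filter integration mentioned above follows a predict-update-resample cycle: motion sensors drive the prediction, and an absolute fix (e.g., radio or vision) drives the update. A 1-D toy sketch with all noise parameters assumed; real indoor localization would work in 2-D or 3-D with floor-plan constraints.

```python
import math
import random

def pf_step(particles, control, meas, meas_std, rng):
    """One predict-update-resample cycle of a 1-D particle filter.
    `control` is the displacement estimated from motion sensors;
    `meas` is an absolute position measurement."""
    # Predict: apply the motion model with a little process noise.
    particles = [p + control + rng.gauss(0.0, 0.1) for p in particles]
    # Update: weight each particle by the Gaussian measurement likelihood.
    weights = [math.exp(-0.5 * ((meas - p) / meas_std) ** 2) for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample particles proportionally to their weights.
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(1)
parts = [rng.uniform(0.0, 10.0) for _ in range(500)]
for step in range(5):                      # walk 1 m/step toward x = 7
    parts = pf_step(parts, 1.0, 3.0 + step, 0.5, rng)
estimate = sum(parts) / len(parts)
print(abs(estimate - 7.0) < 1.0)           # converges near the true position
```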

    A Highly Accurate And Reliable Data Fusion Framework For Guiding The Visually Impaired

    The world has approximately 285 million visually impaired (VI) people, according to a report by the World Health Organization. Thirty-nine million people are estimated to be blind, whereas 246 million people are estimated to have impaired vision. An important factor that motivated this research is the fact that 90% of VI people live in developing countries. Several systems have been designed to improve the quality of life of VI people and support their mobility. Unfortunately, none of these systems provides a complete solution for VI people, and the systems are very expensive. Therefore, this work presents an intelligent framework that includes several types of sensors embedded in a wearable device to support the visually impaired (VI) community. The proposed work is based on an integration of sensor-based and computer-vision-based techniques in order to introduce an efficient and economical visual device. The designed algorithm is divided into two components: obstacle detection and collision avoidance. The system has been implemented and tested in real-time scenarios. A video dataset of 30 videos, with an average of 700 frames per video, was fed to the system for testing. The proposed sequence of techniques for the real-time detection component achieved an accuracy rate of 96.53%, based on a wide detection view using two camera modules and a detection range of approximately 9 meters. An accuracy rate of 98% was obtained on a larger dataset. However, the main contribution of this work is the proposed novel collision avoidance approach, which is based on image depth and fuzzy control rules. Using an x-y coordinate system, we map the input frames: each frame is divided into three areas vertically and further by one third of the frame's height horizontally, in order to specify the urgency of any existing obstacle within that frame.
In addition, we provide precise information to help the VI user avoid front obstacles using fuzzy logic. The strength of the proposed approach is that it aids VI users in avoiding 100% of all detected objects. Once the device is initialized, the VI user can confidently enter unfamiliar surroundings. The implemented device can therefore be described as accurate, reliable, friendly, light, and economically accessible; it facilitates the mobility of VI people and does not require any prior knowledge of the surrounding environment. Finally, our proposed approach was compared with the most efficient existing techniques and was shown to outperform them.
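The frame-zoning logic described above can be sketched as a mapping from an obstacle's normalized frame position to an urgency cue. This is a simplified stand-in for the dissertation's fuzzy-rule controller; the zone boundaries and the urgency labels are assumptions.

```python
def obstacle_urgency(x_frac, y_frac):
    """Map an obstacle's normalized frame position (0..1, with y
    growing downward) to a horizontal zone and an urgency cue.
    Zone boundaries and labels are illustrative only."""
    zone = ("left", "center", "right")[min(2, int(x_frac * 3))]
    near = y_frac > 2 / 3               # lower third of the frame = close
    if near and zone == "center":
        urgency = "high"                # straight ahead and close
    elif near:
        urgency = "medium"              # close, but off to one side
    else:
        urgency = "low"                 # still far away
    return zone, urgency

print(obstacle_urgency(0.5, 0.9))   # -> ('center', 'high')
print(obstacle_urgency(0.1, 0.2))   # -> ('left', 'low')
```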