    Sensing and Signal Processing in Smart Healthcare

    In the last decade, we have witnessed the rapid development of electronic technologies that are transforming our daily lives. Such technologies are often integrated with various sensors that facilitate the collection of human motion and physiological data, and are equipped with wireless communication modules such as Bluetooth, radio frequency identification, and near-field communication. In smart healthcare applications, designing ergonomic and intuitive human–computer interfaces is crucial, because a system that is not easy to use creates a major obstacle to adoption and may significantly reduce the efficacy of the solution. Signal and data processing is another important consideration in smart healthcare applications, because it must ensure high accuracy with a high level of confidence for the applications to be useful to clinicians making diagnosis and treatment decisions. This Special Issue is a collection of 10 articles selected from a total of 26 contributions. These contributions span the areas of signal processing and smart healthcare systems and come mostly from authors in Europe, including Italy, Spain, France, Portugal, Romania, Sweden, and the Netherlands. Authors from China, Korea, Taiwan, Indonesia, and Ecuador are also included.

    Direct communication radio interface for New Radio multicasting and cooperative positioning

    Cotutelle thesis; defending university: UNIVERSITA' MEDITERRANEA DI REGGIO CALABRIA. Recently, the popularity of Millimeter Wave (mmWave) wireless networks has increased due to their capability to cope with the escalation of mobile data demands caused by the unprecedented proliferation of smart devices in the fifth generation (5G). The extremely high frequency, or mmWave, band is a fundamental pillar in the provision of the expected gigabit data rates. Hence, according to both the academic and industrial communities, mmWave technology, e.g., 5G New Radio (NR) and WiGig (60 GHz), is considered one of the main components of 5G and beyond networks. In particular, the 3rd Generation Partnership Project (3GPP) provides for the use of licensed mmWave sub-bands for 5G mmWave cellular networks, whereas the IEEE actively explores the unlicensed band at 60 GHz for next-generation wireless local area networks. In this regard, mmWave has been envisaged as a new technology layout for real-time, heavy-traffic, and wearable applications. This work is devoted to solving the problems of mmWave band communication systems while enhancing their advantages through the use of the direct communication radio interface for NR multicasting, cooperative positioning, and mission-critical applications. The main contributions presented in this work include: (i) a set of mathematical frameworks and simulation tools to characterize multicast traffic delivery in mmWave directional systems; (ii) exploitation of the sidelink relaying concept to deal with channel condition deterioration in dynamic multicast systems and to ensure mission-critical and ultra-reliable low-latency communications; (iii) analysis of cooperative positioning techniques for enhancing cellular positioning accuracy for emerging 5G+ applications that require not only improved communication characteristics but also precise localization. Our study indicates the need for additional mechanisms and research: (i) to further improve multicasting performance in 5G/6G systems; (ii) to investigate sidelink aspects, including, but not limited to, the standardization perspective and relay selection strategies; and (iii) to design cooperative positioning systems based on Device-to-Device (D2D) technology.

    First responders occupancy, activity and vital signs monitoring - SAFESENS

    This paper describes the development and implementation of the SAFESENS (Sensor Technologies for Enhanced Safety and Security of Buildings and its Occupants) location tracking and first responder monitoring demonstrator. An international research collaboration has developed a state-of-the-art wireless indoor location tracking system for first responders, focused initially on firefighter monitoring. Integrating multiple gas sensors and presence detection technologies with building safety sensors and personal monitors has resulted in more accurate and reliable fire and occupancy detection information, which is invaluable to firefighters carrying out their duties in hostile environments. This demonstration system is capable of tracking occupancy levels in an indoor environment, as well as the specific location of firefighters within those buildings, using a multi-sensor hybrid tracking system. The ultra-wideband indoor tracking system is one of the first of its kind to provide indoor localization to sub-meter accuracies, with combined Bluetooth Low Energy capability for low-power communications and additional inertial, temperature, and pressure sensors. This facilitates increased detection accuracy through data fusion, as well as the capability to communicate directly with smartphones and the cloud, without the need for additional gateway support. Glove-based wearable technology has been developed to monitor the vital signs of the first responder and provide these data in real time. Helmet-mounted wearable technology will also incorporate novel electrochemical sensors, developed to monitor the presence of dangerous gases in the vicinity of the firefighter and, again, to provide this information in real time to the firefighter controller. A SAFESENS demonstrator is currently deployed at Tyndall and provides real-time occupancy levels of the different areas of the building, as well as the capability to track the location of the first responders, their health, and the presence of explosive gases in their vicinity. This paper describes the system building blocks and results obtained from the first responder tracking system demonstrator.
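
    The sub-meter localization described above is typically obtained by solving for a tag position from UWB range measurements to fixed anchors. As a minimal, hypothetical sketch (not the SAFESENS implementation; the anchor layout and ranges below are invented), a linearized least-squares trilateration could look like this:

        import numpy as np

        def trilaterate(anchors, ranges):
            # Subtracting the first range equation from the others turns
            # ||x - a_i||^2 = r_i^2 into a linear system A x = b.
            a0, r0 = anchors[0], ranges[0]
            A = 2.0 * (anchors[1:] - a0)
            b = (r0**2 - ranges[1:]**2
                 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
            pos, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pos

        # Hypothetical anchor coordinates (metres) at varied heights.
        anchors = np.array([[0.0, 0.0, 2.5], [5.0, 0.0, 2.0],
                            [5.0, 4.0, 2.8], [0.0, 4.0, 1.5]])
        tag = np.array([2.0, 1.5, 1.2])
        ranges = np.linalg.norm(anchors - tag, axis=1)  # noise-free demo
        print(trilaterate(anchors, ranges).round(3))    # -> [2.  1.5 1.2]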

    Towards Artificial General Intelligence (AGI) in the Internet of Things (IoT): Opportunities and Challenges

    Artificial General Intelligence (AGI), possessing the capacity to comprehend, learn, and execute tasks with human cognitive abilities, engenders significant anticipation and intrigue across scientific, commercial, and societal arenas. This fascination extends particularly to the Internet of Things (IoT), a landscape characterized by the interconnection of countless devices, sensors, and systems, collectively gathering and sharing data to enable intelligent decision-making and automation. This research explores the opportunities and challenges on the path to achieving AGI in the context of the IoT. Specifically, it starts by outlining the fundamental principles of IoT and the critical role of Artificial Intelligence (AI) in IoT systems. Subsequently, it delves into AGI fundamentals, culminating in the formulation of a conceptual framework for AGI's seamless integration within IoT. The application spectrum for AGI-infused IoT is broad, encompassing domains ranging from smart grids, residential environments, manufacturing, and transportation to environmental monitoring, agriculture, healthcare, and education. However, adapting AGI to resource-constrained IoT settings necessitates dedicated research efforts; the paper therefore also addresses the constraints imposed by limited computing resources, the intricacies of large-scale IoT communication, and the critical concerns of security and privacy.

    Wearable and BAN Sensors for Physical Rehabilitation and eHealth Architectures

    The demographic shift towards an older population, together with the increasingly sedentary lifestyle we are adopting, is reflected in the increasingly debilitated physical health of the population. The resulting physical impairments require rehabilitation therapies, which may be assisted by wearable sensors or body area network (BAN) sensors. The use of novel technology for medical therapies can also contribute to reducing costs in healthcare systems and decreasing patient overflow in medical centers. Sensors are the primary enablers of any wearable medical device, with a central role in eHealth architectures. The accuracy of the acquired data depends on the sensors; hence, when considering wearable and BAN sensing integration, they must be proven to be accurate and reliable solutions. This book is a collection of works focusing on the current state of the art of BANs and wearable sensing devices for the physical rehabilitation of impaired or debilitated citizens. The manuscripts that compose this book report on advances in research related to different sensing technologies (optical or electronic) and BAN sensors, their design and implementation, advanced signal processing techniques, and the application of these technologies in areas such as physical rehabilitation, robotics, medical diagnostics, and therapy.

    Predicting Creativity in the Wild: Experience Sampling Method and Sociometric Modeling of Movement and Face-To-Face Interactions in Teams

    With the rapid growth of mobile computing and sensor technology, it is now possible to access data from a variety of sources. A big challenge lies in linking sensor-based data with social and cognitive variables in humans in a real-world context. This dissertation explores the relationship between creativity in teamwork and team members' movement and face-to-face interaction strength in the wild. Using sociometric badges (wearable sensors), electronic Experience Sampling Methods (ESM), the KEYS team creativity assessment instrument, and qualitative methods, three research studies were conducted in academic and industry R&D labs. Sociometric badges captured the movement of team members and the face-to-face interaction between them. The KEYS scale was implemented using ESM for self-rated creativity and expert-coded creativity assessment. The activities (movement and face-to-face interaction) and creativity of one five-member and two seven-member teams were tracked for twenty-five days, eleven days, and fifteen days, respectively. Day-wise values of movement and face-to-face interaction for participants were mean-split categorized as creative and non-creative using the self-rated and expert-coded creativity measures. Paired-samples t-tests [t(36) = 3.132, p < 0.005; t(23) = 6.49, p < 0.001] confirmed that average daily movement energy during creative days (M = 1.31, SD = 0.04; M = 1.37, SD = 0.07) was significantly greater than the average daily movement of non-creative days (M = 1.29, SD = 0.03; M = 1.24, SD = 0.09). The eta-squared statistics (0.21; 0.36) indicated a large effect size. A paired-samples t-test also confirmed that the face-to-face interaction tie strength of team members during creative days (M = 2.69, SD = 4.01) was significantly greater [t(41) = 2.36, p < 0.01] than during non-creative days (M = 0.9, SD = 2.1). The eta-squared statistic (0.11) indicated a large effect size. The combined approach of principal component analysis (PCA) and linear discriminant analysis (LDA) applied to movement and face-to-face interaction data predicted creativity with 87.5% and 91% accuracy, respectively. This work advances creativity research and provides a foundation for sensor-based real-time creativity support tools for teams.
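
    As a toy illustration of the analysis pipeline reported above (invented data, not the study's dataset; the pairing of days is simplified), the mean-split labelling, paired-samples t-test, and eta-squared effect size can be computed as follows:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Invented per-day data for one team: movement energy and a
        # self-rated creativity score.
        movement = rng.normal(1.30, 0.05, size=25)
        creativity = rng.normal(3.0, 1.0, size=25)

        # Mean split: a day is "creative" if its rating exceeds the mean.
        creative_days = creativity > creativity.mean()
        creative = movement[creative_days]
        non_creative = movement[~creative_days]

        # Pair equal-length samples for the paired t-test (toy truncation).
        n = min(len(creative), len(non_creative))
        t, p = stats.ttest_rel(creative[:n], non_creative[:n])

        # Eta squared for a paired t-test: t^2 / (t^2 + df).
        df = n - 1
        eta_sq = t**2 / (t**2 + df)
        print(f"t({df}) = {t:.3f}, p = {p:.3f}, eta^2 = {eta_sq:.2f}")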

    Design of Wireless Sensor Networks and Sensor Fusion Techniques

    Ambient Intelligence (AmI) envisions a world where smart electronic environments are aware of and responsive to their context. People moving through these settings engage many computational devices and systems simultaneously, even if they are not aware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication, and natural interfaces. The dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes: simple devices that typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors, and some form of energy supply (either batteries or energy-scavenging modules). Low cost, low computational power, low energy consumption, and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To handle the large amount of data generated by a WSN, several multi-sensor data fusion techniques have been developed. The aim of multi-sensor data fusion is to combine data to achieve better accuracy and inferences than could be achieved by the use of a single sensor alone. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: multimodal surveillance and activity recognition. Novel techniques to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented. Such techniques allow the detection of the number of people moving in the environment, their direction of movement, and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to increase its performance in people tracking, and we embed a PIR sensor within the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime. Activity recognition is a fundamental block in natural interfaces. A challenging objective is to design an activity recognition system that is able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure in which simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefit of such an architecture in terms of increased recognition performance and robustness to faults and noise, and we show how network lifetime can be extended through a performance-power trade-off. Smart objects can enhance user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device of limited computational power. Finally, the development of activity recognition techniques can greatly benefit from the availability of shared datasets. We report our experience in building a dataset for activity recognition. The dataset is freely available to the scientific community for research purposes and can be used as a testbench for developing, testing, and comparing different activity recognition techniques.
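
    The hierarchical recognition architecture described above, in which node-level classifiers report gesture labels and a meta-classifier fuses however many outputs arrive, can be sketched roughly as follows (a minimal illustration under assumptions; the confidence-weighted vote stands in for the dissertation's meta-classifier):

        from collections import defaultdict

        def fuse(node_outputs):
            """Confidence-weighted vote over a varying number of nodes.

            node_outputs: list of (label, confidence) pairs; nodes that
            are asleep or faulty simply do not report, so the list may
            shrink without breaking the fusion step.
            """
            scores = defaultdict(float)
            for label, confidence in node_outputs:
                scores[label] += confidence
            return max(scores, key=scores.get) if scores else None

        # Three nodes report, a fourth is down: fusion still works.
        print(fuse([("shake", 0.9), ("shake", 0.6), ("tilt", 0.7)]))  # shake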

    Fusion of wearable and visual sensors for human motion analysis

    Human motion analysis is concerned with the study of human activity recognition, human motion tracking, and the analysis of human biomechanics, and has applications within the areas of entertainment, sports, and healthcare. For example, activity recognition, which aims to understand and identify different tasks from motion, can be applied to create records of staff activity in the operating theatre at a hospital; motion tracking is already employed in some games to provide an improved user interaction experience and can be used to study how medical staff interact in the operating theatre; and human biomechanics, the study of the structure and function of the human body, can be used to better understand athlete performance, pathologies in certain patients, and to assess the surgical skill of medical staff. As health services strive to improve the quality of patient care and meet the growing demands of expanding populations around the world, solutions that can improve patient care, the diagnosis of pathology, and the monitoring and training of medical staff are necessary. Surgical workflow analysis, for example, aims to assess and optimise surgical protocols in the operating theatre by evaluating the tasks that staff perform and measurable outcomes. Human motion analysis methods can be used to quantify the activities and performance of staff for surgical workflow analysis; however, a number of challenges must be overcome before routine motion capture of staff in an operating theatre becomes feasible. Current commercial human motion capture technologies have demonstrated that they are capable of acquiring human movement with sub-centimetre accuracy; however, the complicated setup procedures, size, and embodiment of current systems make them cumbersome and unsuited for routine deployment within an operating theatre. Recent advances in pervasive sensing have resulted in camera systems that can detect and analyse human motion, and small wearable sensors that can measure a variety of parameters from the human body, such as heart rate, fatigue, balance, and motion. The work in this thesis investigates different methods that enable human motion to be more easily, reliably, and accurately captured through ambient and wearable sensor technologies, to address some of the main challenges that have limited the use of motion capture technologies in certain areas of study. Sensor embodiment and the accuracy of activity recognition are among the challenges that affect the adoption of wearable devices for monitoring human activity. Using a single inertial sensor, which captures the movement of the subject, a variety of motion characteristics can be measured. For patients, wearable inertial sensors can be used in long-term activity monitoring to better understand the condition of the patient and potentially identify deviations from normal activity. For medical staff, inertial sensors can be used to capture the tasks being performed for automated workflow analysis, which is useful for staff training, optimisation of existing processes, and early indications of complications within clinical procedures. Feature extraction and classification methods are introduced in this thesis that demonstrate motion classification accuracies of over 90% for five different classes of walking motion using a single ear-worn sensor; a sketch of such a pipeline follows below.
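
    A hedged sketch of such a windowed feature-extraction and classification pipeline, with synthetic accelerometer windows and generic time-domain features standing in for the thesis's actual features:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)

        def extract_features(window):
            # Per-axis mean, standard deviation, and energy of one
            # (samples x 3) accelerometer window.
            return np.concatenate([window.mean(0), window.std(0),
                                   (window**2).mean(0)])

        # Invented data: 200 windows of 100 samples, 5 walking classes.
        windows = rng.normal(size=(200, 100, 3))
        labels = rng.integers(0, 5, size=200)
        X = np.array([extract_features(w) for w in windows])

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        print(cross_val_score(clf, X, labels, cv=5).mean())  # chance on noise
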
    To capture human body posture, current capture systems generally require a large number of sensors or reflective reference markers to be worn on the body, which presents a challenge for many applications, such as monitoring human motion in the operating theatre, as they may restrict natural movements and make setup complex and time consuming. To address this, a method is proposed that uses regression to estimate motion from a subset of fewer wearable inertial sensors. This method is demonstrated using three sensors on the upper body and is shown to achieve mean estimation accuracies as low as 1.6 cm, 1.1 cm, and 1.4 cm for the hand, elbow, and shoulders, respectively, when compared with a gold-standard optical motion capture system. Using a subset of three sensors, mean errors for hand position reach 15.5 cm. Unlike human motion capture systems that rely on vision and reflective reference point markers, commonly known as marker-based optical motion capture, wearable inertial sensors are prone to inaccuracies resulting from an accumulation of inaccurate measurements, which becomes increasingly prevalent over time. Two methods are introduced in this thesis that aim to solve this challenge using visual rectification of the assumed state of the subject. Using a ceiling-mounted camera, a human detection and human motion tracking method is introduced to improve the average mean accuracy of tracking to within 5.8 cm in a 3 m × 5 m laboratory. To improve the accuracy of capturing the position of body parts and posture for human biomechanics, a camera is also utilised to track body part movements and provide visual rectification of human pose estimates from inertial sensing. For most subjects, deviations of less than 10% from the ground truth are achieved for hand positions, which exhibit the greatest error, and the occurrence of other common sources of visual and inertial estimation error, such as measurement noise, visual occlusion, and sensor calibration, is shown to be reduced.
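
    The sparse-sensor posture estimation described above regresses full joint positions from a reduced set of inertial measurements. A rough sketch with synthetic data (the thesis's actual regression model and input features are not specified here):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(2)

        # Invented mapping: 9 inputs (3 IMUs x 3-axis orientation) to
        # 9 outputs (hand, elbow, shoulder 3-D positions).
        X = rng.normal(size=(2000, 9))
        W = rng.normal(size=(9, 9))
        Y = np.tanh(X @ W) + 0.01 * rng.normal(size=(2000, 9))

        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
        reg = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                           random_state=0).fit(X_tr, Y_tr)

        err = np.abs(reg.predict(X_te) - Y_te).mean(axis=0)
        print("mean abs error per output dimension:", err.round(3))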

    Pedestrian Dead Reckoning in Multiple Poses of a Smartphone Using a Fusion of Integration and Parametric Approaches

    Ph.D. dissertation, Department of Mechanical and Aerospace Engineering, College of Engineering, Seoul National University, August 2020 (advisor: Chan Gook Park). In this dissertation, an IA-PA fusion-based PDR (Pedestrian Dead Reckoning) using low-cost inertial sensors is proposed to improve indoor position estimation. Specifically, an IA (Integration Approach)-based PDR algorithm combined with measurements from a PA (Parametric Approach) is constructed so that the algorithm operates even in the various poses that occur when a pedestrian moves indoors with a smartphone. I also propose an algorithm that estimates the device attitude robustly in disturbance situations by means of an ellipsoidal method. In addition, by using machine-learning-based pose recognition, the position estimation performance can be improved by varying the measurement update according to the pose. First, I propose an adaptive attitude estimation based on an ellipsoid technique to accurately estimate the direction of movement of a smartphone device. An AHRS (Attitude and Heading Reference System) calculates the attitude based on the gyro and uses an accelerometer and a magnetometer as measurements to compensate for the drift caused by gyro sensor errors. In general, attitude estimation performance is poor under acceleration and geomagnetic disturbances; to improve the estimation performance effectively, this dissertation proposes an ellipsoid-based adaptive attitude estimation technique. When a measurement disturbance arrives, adjusting the measurement covariance with the ellipsoid method, which considers the direction of the disturbance, allows a more accurate measurement update than adaptive estimation techniques that ignore direction. In particular, when the disturbance enters along only one axis, the proposed algorithm can use the measurement partially by updating the other two axes. The proposed algorithm shows its effectiveness in attitude estimation under disturbances in experiments with a rate table and motion capture equipment. Next, I propose a PDR algorithm that integrates the IA and PA and can operate in various poses. When moving indoors with a smartphone, there are many degrees of freedom, so various poses such as making a phone call, texting, and carrying the phone in a trouser pocket are possible. Existing smartphone-based positioning algorithms estimate the position based on the PA, which can be used only when the pedestrian's walking direction and the device's heading coincide; if they do not, the angular mismatch produces a large position error. To solve this problem, this dissertation proposes an algorithm that constructs the state variables based on the IA and uses the position vector from the PA as a measurement. If, based on the pose recognized through machine learning, the walking direction and the device heading do not match, the position is updated using the walking direction calculated with PCA (Principal Component Analysis) and the step length obtained through the PA, so the algorithm operates robustly even in the various poses that occur during walking. Experiments considering various poses and paths confirm that the proposed method stably estimates the position and improves performance in diverse indoor environments.

    Contents: Chapter 1, Introduction (motivation and background; objectives and contribution; organization of the dissertation). Chapter 2, Pedestrian Dead Reckoning System (overview of pedestrian dead reckoning; parametric approach: step detection, step length estimation, heading estimation; integration approach: extended Kalman filter, INS-EKF-ZUPT; activity recognition using machine learning: challenges in HAR, the activity recognition chain). Chapter 3, Attitude Estimation in a Smartphone (adaptive attitude estimation: indirect Kalman filter, conventional algorithms, ellipsoidal methods; experimental results: simulation, rate table, handheld rotation, magnetic disturbance). Chapter 4, Pedestrian Dead Reckoning in Multiple Poses of a Smartphone (system overview; machine-learning-based pose classification: training dataset, feature extraction and selection, classification results; fusion of the integration and parametric approaches: system model, measurement model, mode selection, observability analysis; experimental results: AHRS, PCA, and IA-PA results). Chapter 5, Conclusions.
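
    The directional covariance adjustment described above can be illustrated with a generic Kalman measurement update in which the measurement covariance is inflated along an estimated disturbance direction; this is a sketch under assumptions, not the dissertation's exact ellipsoidal method:

        import numpy as np

        def adaptive_update(x, P, z, H, R0, d, k):
            # Inflate measurement covariance along the unit disturbance
            # direction d: axes orthogonal to d keep nominal covariance,
            # so the measurement is still partially trusted there.
            d = d / np.linalg.norm(d)
            R = R0 + k * np.outer(d, d)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
            return x, P

        # Toy example: 3-axis state observed directly, disturbance on x.
        x, P = np.zeros(3), np.eye(3)
        z = np.array([5.0, 0.1, -0.1])      # x component is corrupted
        H, R0 = np.eye(3), 0.01 * np.eye(3)
        x, P = adaptive_update(x, P, z, H, R0,
                               d=np.array([1.0, 0.0, 0.0]), k=100.0)
        print(x.round(3))  # x barely moves; y and z follow the measurement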

    Methods for monitoring the human circadian rhythm in free-living

    Our internal clock, the circadian clock, determines the times at which we have our best cognitive abilities, are physically strongest, and are tired. Circadian clock phase is influenced primarily through exposure to light. A direct pathway from the eyes to the suprachiasmatic nucleus, where the circadian clock resides, is used to synchronise the circadian clock to external light-dark cycles. In modern society, with the ability to work anywhere at any time and a full social agenda, many struggle to keep internal and external clocks synchronised. Living against our circadian clock makes us less efficient and poses serious health risks, especially when sustained over a long period of time, e.g. in shift workers. Assessing circadian clock phase is a cumbersome and uncomfortable task. A common method, dim light melatonin onset testing, requires a series of eight saliva samples taken at hourly intervals while the subject stays in dim light conditions from 5 hours before until 2 hours past their habitual bedtime. At the same time, sensor-rich smartphones have become widely available and wearable computing is on the rise. The hypothesis of this thesis is that smartphones and wearables can be used to record sensor data to monitor human circadian rhythms in free-living. To test this hypothesis, we conducted research on specialised wearable hardware and smartphones to record relevant data, and developed algorithms to monitor circadian clock phase in free-living. We first introduce our smart eyeglasses concept, which can be personalised to the wearer's head and 3D-printed. Hardware was integrated into the eyewear to recognise typical activities of daily living (ADLs), and a light sensor integrated into the eyeglasses bridge was used to detect screen use. In addition to wearables, we also investigate whether sleep-wake patterns can be revealed from smartphone context information. We introduce novel methods to detect sleep opportunity, which incorporate expert knowledge to filter and fuse classifier outputs. Furthermore, we estimate light exposure from smartphone sensor and weather information. We applied the Kronauer model to compare the phase shift resulting from head light measurements, wrist measurements, and smartphone estimations. We found it was possible to monitor circadian phase shift from light estimation based on smartphone sensor and weather information with a weekly error of 32±17 min, which outperformed wrist measurements in 11 out of 12 participants. Sleep could be detected from smartphone use with an onset error of 40±48 min and a wake error of 42±57 min. Screen use could be detected with the smart eyeglasses with 0.9 ROC AUC for ambient light intensities below 200 lux. Nine clusters of ADLs were distinguished using Gaussian mixture models with an average accuracy of 77%. In conclusion, a combination of the proposed smartphone and smart eyeglasses applications could support users in synchronising their circadian clock to external clocks, thus living a healthier lifestyle.
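
    As a hedged sketch of the screen-use evaluation mentioned above (synthetic light readings and labels; the thesis's features and thresholds are not reproduced), the below-200-lux filtering and ROC AUC computation might look like:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)

        # Invented samples: ambient light (lux), a light-sensor score
        # from the eyeglasses bridge, and ground-truth screen-use labels.
        ambient = rng.uniform(0, 1000, size=2000)
        screen_on = rng.integers(0, 2, size=2000)
        score = (screen_on * rng.normal(0.8, 0.2, 2000)
                 + (1 - screen_on) * rng.normal(0.3, 0.2, 2000))

        # Evaluate only where ambient light stays below 200 lux, where
        # the screen's own contribution to measured light is detectable.
        mask = ambient < 200
        auc = roc_auc_score(screen_on[mask], score[mask])
        print(f"ROC AUC (<200 lux): {auc:.2f}")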