
    Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs

    We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker Sparse Inertial Poser (SIP) enables 3D human pose estimation using only 6 sensors (attached to the wrists, lower legs, back and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall. Comment: 12 pages, accepted at Eurographics 201
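    SIP's joint multi-frame fitting can be illustrated with a deliberately simplified 1-D sketch (hypothetical names and dynamics; the actual tracker optimizes the pose of a full statistical body model): recover a joint-angle trajectory from per-frame orientation readings plus accelerations modeled as second finite differences, minimizing a single least-squares objective over all frames at once.

```python
import numpy as np

# Hypothetical 1-D analogue of SIP's multi-frame optimization (illustration
# only, not the paper's actual model): theta holds one joint angle per frame.

def second_diff(theta):
    """Discrete acceleration: theta[k] - 2*theta[k+1] + theta[k+2]."""
    return theta[2:] - 2.0 * theta[1:-1] + theta[:-2]

def objective(theta, ori_meas, acc_meas, w_ori=1.0, w_acc=0.5):
    """Weighted sum of squared orientation and acceleration residuals."""
    e_ori = theta - ori_meas
    e_acc = second_diff(theta) - acc_meas
    return w_ori * (e_ori @ e_ori) + w_acc * (e_acc @ e_acc)

def fit(ori_meas, acc_meas, w_ori=1.0, w_acc=0.5, steps=500, lr=0.05):
    """Gradient descent on the joint objective, warm-started from the
    orientation readings."""
    theta = ori_meas.copy()
    for _ in range(steps):
        r = second_diff(theta) - acc_meas
        grad = 2.0 * w_ori * (theta - ori_meas)
        grad[:-2] += 2.0 * w_acc * r    # d r_k / d theta[k]   = 1
        grad[1:-1] -= 4.0 * w_acc * r   # d r_k / d theta[k+1] = -2
        grad[2:] += 2.0 * w_acc * r     # d r_k / d theta[k+2] = 1
        theta = theta - lr * grad
    return theta
```

    Because the acceleration term couples neighbouring frames, the solver denoises the orientation readings rather than trusting each frame independently, which mirrors the paper's argument for optimizing over multiple frames jointly.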

    Exploring the use of smart glasses, gesture control, and environmental data in augmented reality games

    In the last decade, augmented reality has become a popular trend. Big corporations like Microsoft, Facebook, and Google have started to invest in augmented reality because they see its potential, especially with the rise of consumer head-mounted displays such as Microsoft's HoloLens and ODG's R7. However, there is a gap in knowledge about interaction with such devices, since they are fairly new and an average consumer cannot yet afford them due to their relatively high prices. This thesis describes the Ghost Hunters game, a mobile augmented reality pervasive game that uses environmental light data to charge the in-game "goggles". The game has two versions, one for smartphones and one for smart glasses, and was implemented to explore two interaction methods, buttons and natural hand gestures, on both device types. In addition, the thesis explores the use of ambient light in augmented reality games. First, the thesis defines the essential concepts related to games and augmented reality based on the literature and then describes the current state of the art of pervasive games and smart glasses. Second, both the design and implementation of the Ghost Hunters game are described in detail. Afterwards, the three rounds of field trials conducted to investigate the suitability of the two interaction methods are described and discussed. The findings suggest that smart glasses are more immersive than smartphones in the context of pervasive AR games. Moreover, prior AR experience has a significant positive impact on the immersion of smart glasses users; similarly, males were more immersed in the game than females. Hand gestures proved more usable than buttons on both devices. However, the interaction method did not affect game engagement at all, although it surprisingly did affect how users perceived the UI on smart glasses: users who used the physical buttons were more likely to notice the UI elements than users who used hand gestures.

    An inertial motion capture framework for constructing body sensor networks

    Motion capture is the process of measuring and subsequently reconstructing the movement of an animated object or being in virtual space. Virtual reconstructions of human motion play an important role in numerous application areas such as animation, medical science, ergonomics, etc. While optical motion capture systems are the industry standard, inertial body sensor networks are becoming viable alternatives due to portability, practicality and cost. This thesis presents an innovative inertial motion capture framework for constructing body sensor networks through software environments, smartphones and web technologies. The first component of the framework is a unique inertial motion capture software environment aimed at providing an improved experimentation environment, accompanied by programming scaffolding and a driver development kit, for users interested in studying or engineering body sensor networks. The software environment provides a bespoke 3D engine for kinematic motion visualisations and a set of tools for hardware integration. The software environment is used to develop the hardware behind a prototype motion capture suit focused on low power consumption and hardware-centricity. Additional inertial measurement units, which are available commercially, are also integrated to demonstrate the functionality of the software environment while providing the framework with additional sources of motion data. The smartphone is the most ubiquitous computing technology, and its worldwide uptake has prompted many advances in wearable inertial sensing technologies. Smartphones contain gyroscopes, accelerometers and magnetometers, a combination of sensors that is commonly found in inertial measurement units. This thesis presents a mobile application that investigates whether the smartphone is capable of inertial motion capture by constructing a novel omnidirectional body sensor network.
This thesis proposes a novel use for web technologies through the development of the Motion Cloud, a repository and gateway for inertial data. Web technologies have the potential to replace motion capture file formats with online repositories and to set a new standard for how motion data is stored. From a single inertial measurement unit to a more complex body sensor network, the proposed architecture is extendable and facilitates the integration of any inertial hardware configuration. The Motion Cloud's data can be accessed through an application programming interface or through a web portal that provides users with the functionality for visualising and exporting the motion data.
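    The smartphone body-sensor-network idea above rests on fusing gyroscope and accelerometer readings into an orientation estimate. A minimal standard technique for a single tilt angle, shown here as a sketch and not claimed to be the thesis's actual estimator, is the complementary filter:

```python
# Minimal complementary filter for one tilt angle (sketch; the thesis's
# actual sensor-fusion pipeline may differ). The gyroscope path is smooth
# but drifts; the accelerometer path is noisy but drift-free; alpha blends
# the two, heavily favouring the gyro at each step.

def complementary_filter(gyro_rates_dps, accel_angles_deg, dt, alpha=0.98):
    angle = accel_angles_deg[0]          # initialise from the accelerometer
    estimates = []
    for rate, acc_angle in zip(gyro_rates_dps, accel_angles_deg):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc_angle
        estimates.append(angle)
    return estimates
```

    Even with a biased gyroscope, the small accelerometer weight is enough to pin the estimate near the true angle, which is why this filter is a common baseline on phone-class IMU hardware.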

    Augmented reality selection through smart glasses

    The smart glasses market continues to grow. This raises the possibility that smart glasses will one day have the same presence in people's daily lives that smartphones already have. Several interaction methods for smart glasses have been studied, but it is not yet clear which method is best for interacting with virtual objects. This research covers studies that focus on different interaction methods for augmented reality applications, highlighting the interaction methods for smart glasses along with their advantages and disadvantages. In this work, an indoor Augmented Reality prototype was developed, implementing three different interaction methods. Users' preferences and their willingness to perform each interaction method in public were studied. In addition, the reaction time, the time between the detection of a marker and the user's interaction with it, was measured. An outdoor Augmented Reality application was also developed to understand the different challenges between indoor and outdoor Augmented Reality applications. The discussion shows that users feel more comfortable using an interaction method similar to what they already use. However, the solution combining two interaction methods, the smart glasses' tap function and head movement, achieves results close to those of the controller. It is important to highlight that this was always the users' first and only time with each interaction method, so there was no learning phase before testing. This suggests that the future of smart glasses interaction may be a merge of different interaction techniques.

    Development of a mobile technology system to measure shoulder range of motion

    In patients with shoulder movement impairment, assessing and monitoring shoulder range of motion is important for determining the severity of impairments due to disease or injury and for evaluating the effects of interventions. Current clinical methods of goniometry and visual estimation require an experienced user and suffer from low inter-rater reliability. More sophisticated techniques such as optical or electromagnetic motion capture exist but are expensive and restricted to a specialised laboratory environment. Inertial measurement units (IMUs), such as those within smartphones and smartwatches, show promise as tools to bridge the gap between laboratory and clinical techniques and to accurately measure shoulder range of motion during both clinic assessments and daily life. This study aims to develop an Android mobile application for both a smartphone and a smartwatch to assess shoulder range of motion. Initial performance characterisation of the inertial sensing capabilities of both devices running the application was conducted against an industrial inclinometer, a free-swinging pendulum and a custom-built servo-powered gimbal. An initial validation study comparing the smartwatch application with a universal goniometer for shoulder ROM assessment was conducted with twenty healthy participants. An impaired condition was simulated by applying kinesiology tape across the participants' shoulder girdle. Agreement and intra- and inter-day reliability were assessed in both the healthy and impaired states. Both the phone and the watch performed with acceptable accuracy and repeatability during static conditions (within ±1.1°) and during dynamic conditions, where they were strongly correlated with the pendulum and gimbal data (ICC > 0.9). Both devices performed accurately within the range of angular velocities typical of humerus movement during activities of daily living (frequency response of 377°/s and 358°/s for the phone and watch respectively). The concurrent agreement between the watch and the goniometer was high in both healthy and impaired states (ICC > 0.8) and between measurement days (ICC > 0.8). The mean absolute difference between the watch and the goniometer was within the accepted minimal clinically important difference for shoulder movement (5.11° to 10.58°). The results show promise for the use of the developed Android application as a goniometry tool for assessment of shoulder ROM. However, the limits of agreement across all the tests fell outside the acceptable margin, and further investigation is required to determine validity. Evaluation in patients with clinical impairment is also required to assess the feasibility of using the application in clinical practice.
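    The basic reason a watch or phone can stand in for an inclinometer is that, at rest, a single accelerometer sample already yields a tilt angle: the direction of gravity in the device frame. A sketch of that computation (the axis convention is an assumption for illustration; the abstract does not specify the application's exact method):

```python
import math

# Static tilt from one accelerometer sample: the angle between the device
# y-axis and the gravity vector. Axis convention is an assumption; any
# consistent scale (g or m/s^2) works since the units cancel.

def tilt_deg(ax, ay, az):
    return math.degrees(math.atan2(math.hypot(ax, az), ay))
```

    A device held with its y-axis vertical reads 0°, and rotating it onto its side reads 90°, matching how a mechanical inclinometer is used against the humerus.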

    Validation of two-dimensional video-based inference of finger kinematics with pose estimation

    Accurate capture of finger movements for biomechanical assessment has typically been achieved within laboratory environments through the use of physical markers attached to a participant's hands. However, such requirements can narrow the broader adoption of movement tracking for kinematic assessment outside these laboratory settings, such as in the home. Thus, there is a need for markerless hand motion capture techniques that are easy to use and accurate enough to evaluate the complex movements of the human hand. Several recent studies have validated lower-limb kinematics obtained with a marker-free technique, OpenPose. This investigation examines the accuracy of OpenPose, when applied to images from single RGB cameras, against a 'gold standard' marker-based optical motion capture system that is commonly used for hand kinematics estimation. Participants completed four single-handed activities with right and left hands, including hand abduction and adduction, radial walking, metacarpophalangeal (MCP) joint flexion, and thumb opposition. The accuracy of finger kinematics was assessed using the root mean square error. Mean total active flexion was compared using the Bland–Altman approach and the coefficient of determination of linear regression. Results showed good agreement for the abduction and adduction and thumb opposition activities. Lower agreement between the two methods was observed for the radial walking (mean difference between the methods of 5.03°) and MCP flexion (mean difference of 6.82°) activities, due to occlusion. This investigation demonstrated that OpenPose, applied to videos captured with monocular cameras, can be used for markerless motion capture for finger tracking with an error below 11° and on the order of that which is accepted clinically.
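    Per joint, the comparison above reduces to computing an angle from three 2D keypoints and summarising the disagreement between the two methods. A minimal sketch (hypothetical helper names; OpenPose itself only outputs the keypoint coordinates):

```python
import math

# Interior angle at keypoint b, formed by 2D keypoints a-b-c (e.g. an MCP
# joint flanked by its neighbouring keypoints), plus the RMSE used to
# compare markerless and marker-based angle series.

def joint_angle_deg(a, b, c):
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_t = dot / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def rmse(pred, ref):
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred))
```

    Note that a 2D angle is only a projection of the true 3D joint angle, which is one reason out-of-plane movements and occlusion degrade agreement.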

    Beaming Displays

    Existing near-eye display designs struggle to balance multiple trade-offs such as form factor, weight, computational requirements, and battery life. These design trade-offs are major obstacles on the path towards an all-day usable near-eye display. In this work, we address these trade-offs by, paradoxically, removing the display from near-eye displays. We present beaming displays, a new type of near-eye display system that uses a projector and an all-passive wearable headset. We modify an off-the-shelf projector with additional lenses and install it in the environment to beam images from a distance to the passive wearable headset. The beaming projection system tracks the current position of the wearable headset to project distortion-free images with correct perspectives. In our system, the wearable headset guides the beamed images to the user's retina, where they are perceived as an augmented scene within the user's field of view. In addition to providing the system design of the beaming display, we provide a physical prototype and show that the beaming display can provide resolutions as high as consumer-level near-eye displays. We also discuss the different aspects of the design space for our proposal.
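    Projecting a distortion-free image onto a tracked, moving target is, for a locally planar receiving surface, a perspective correction, commonly modelled as a 3×3 homography. A sketch of the warp under that assumption (the paper's actual correction pipeline may differ):

```python
import numpy as np

# Map Nx2 image points through a 3x3 homography H (projective warp). This
# is the standard model for pre-distorting projector output for a planar
# target, used here purely as an illustration of perspective correction.

def apply_homography(H, pts):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to Euclidean
```

    As the tracked headset moves, the projector would re-estimate H each frame so that the beamed image stays undistorted from the wearer's viewpoint.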