
    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first-person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Given this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches combine different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field. Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction
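    As a hedged illustration (not a method prescribed by the survey), the Python sketch below shows skin-colour hand segmentation, one of the simple image features often combined in early egocentric pipelines for object detection and interaction analysis; the input file name and the YCrCb threshold values are assumptions made for the example.

```python
# Illustrative only: skin-colour hand segmentation of the kind used in
# early First Person Vision pipelines. Thresholds are heuristic assumptions.
import cv2
import numpy as np

def hand_mask(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of likely hand pixels in an egocentric frame."""
    # Convert to YCrCb, where skin tones cluster in a compact Cr/Cb range.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 135, 85], dtype=np.uint8)    # assumed lower bounds
    upper = np.array([255, 180, 135], dtype=np.uint8)  # assumed upper bounds
    mask = cv2.inRange(ycrcb, lower, upper)
    # Morphological opening removes small, isolated false positives.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    frame = cv2.imread("egocentric_frame.jpg")  # placeholder path
    if frame is not None:
        mask = hand_mask(frame)
        coverage = mask.mean() / 255.0
        print(f"Hand pixels cover {coverage:.1%} of the frame")
```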

    Multi-Sensor Context-Awareness in Mobile Devices and Smart Artefacts

    The use of context in mobile devices is receiving increasing attention in mobile and ubiquitous computing research. In this article we consider how to augment mobile devices with awareness of their environment and situation as context. Most work to date has been based on the integration of generic context sensors, in particular for location and visual context. We propose a different approach based on the integration of multiple diverse sensors for awareness of situational context that cannot be inferred from location, and targeted at mobile device platforms that typically do not permit processing of visual context. We have investigated multi-sensor context-awareness in a series of projects and report experience from the development of a number of device prototypes. These include an awareness module for augmentation of a mobile phone, the Mediacup exemplifying context-enabled everyday artifacts, and the Smart-Its platform for aware mobile devices. The prototypes have been explored in various applications to validate the multi-sensor approach to awareness and to develop new perspectives on how embedded context-awareness can be applied in mobile and ubiquitous computing
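    As a hedged illustration of the multi-sensor idea (not the actual Mediacup or Smart-Its implementation), the Python sketch below fuses a few cheap, non-visual sensor readings into a coarse situational-context label; the sensor set, thresholds, and labels are assumptions for the example.

```python
# A minimal sketch of inferring situational context from several cheap,
# non-visual sensors. Thresholds and labels are illustrative assumptions.
from dataclasses import dataclass
from statistics import pvariance
from typing import List

@dataclass
class SensorWindow:
    accel_magnitude: List[float]  # acceleration magnitude (g) over a short window
    light_lux: float              # ambient light level
    sound_level_db: float         # coarse microphone loudness

def infer_context(w: SensorWindow) -> str:
    """Map a multi-sensor window to a coarse situational-context label."""
    moving = pvariance(w.accel_magnitude) > 0.05   # assumed motion threshold
    dark = w.light_lux < 10
    noisy = w.sound_level_db > 65
    if moving and noisy:
        return "carried in a busy environment"
    if moving:
        return "carried, quiet surroundings"
    if dark:
        return "stationary, stowed (pocket or bag)"
    return "stationary on a surface"

if __name__ == "__main__":
    window = SensorWindow([1.0, 1.3, 0.8, 1.4, 0.9],
                          light_lux=3.0, sound_level_db=40.0)
    print(infer_context(window))  # -> "carried, quiet surroundings"
```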

    Wearables as Augmentation Means: Conceptual Definition, Pathways, and Research Framework

    Wearables pervade many facets of human endeavor, thanks to their integration into everyday artifacts and activities. From fitness bands to medical patches, to augmented reality glasses, wearables have demonstrated immense potential for intelligence augmentation (IA) through human-machine symbiosis. To advance an understanding of how wearables engender IA and to provide a solid foundation for grounding IS research on wearables and IA, this study draws from Engelbart’s framework for augmenting human intellect to: (1) develop a conceptual definition of wearable technology as a digitally enhanced body-borne device that can augment a human or non-human capability by affording context sensitivity, mobility, hands-free interaction, and constancy of operation, (2) extend Engelbart’s framework to the sociomaterial domain to account for the emergence of augmented capabilities that are neither wholly social nor wholly material, and (3) propose and elaborate four augmentation pathways (complementation, supplementation, mediation, and mutual constitution) to facilitate IA research

    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with a diverse background, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled “Eyewear Computers for Human-Computer Interaction”, as well as 5) two follow-up workshops on “Egocentric Perception, Interaction, and Computing” at the European Conference on Computer Vision (ECCV) as well as “Eyewear Computing” at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp)

    Game Theory Solutions in Sensor-Based Human Activity Recognition: A Review

    Human Activity Recognition (HAR) tasks automatically identify human activities from sensor data, with numerous applications in healthcare, sports, security, and human-computer interaction. Despite significant advances in HAR, critical challenges still exist. Game theory has emerged as a promising tool for addressing such challenges in machine learning problems, including HAR. However, there is little research on applying game-theoretic solutions to HAR problems. This review paper explores the potential of game theory as a solution for HAR tasks and bridges the gap between game theory and HAR research by suggesting novel game-theoretic approaches for HAR problems. The contributions of this work include exploring how game theory can improve the accuracy and robustness of HAR models, investigating how game-theoretic concepts can optimize recognition algorithms, and comparing game-theoretic approaches against existing HAR methods. The objective is to provide insights into the potential of game theory as a solution for sensor-based HAR and to contribute to the development of more accurate and efficient recognition systems in future research
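    As a hedged illustration of one cooperative game-theoretic concept that could be applied to sensor-based HAR (not an algorithm taken from the review), the sketch below computes Shapley values to credit each sensor's contribution to a hypothetical activity classifier; the sensor names and accuracy figures are invented for the example.

```python
# Hedged illustration: Shapley values from cooperative game theory used to
# credit each sensor's contribution to a HAR model. The accuracy numbers
# below are synthetic stand-ins, not results from the review.
from itertools import combinations
from math import factorial

SENSORS = ("accelerometer", "gyroscope", "heart_rate")

# Assumed accuracy of a HAR classifier trained on each sensor subset.
ACCURACY = {
    frozenset(): 0.25,
    frozenset({"accelerometer"}): 0.70,
    frozenset({"gyroscope"}): 0.55,
    frozenset({"heart_rate"}): 0.40,
    frozenset({"accelerometer", "gyroscope"}): 0.82,
    frozenset({"accelerometer", "heart_rate"}): 0.75,
    frozenset({"gyroscope", "heart_rate"}): 0.60,
    frozenset(SENSORS): 0.88,
}

def shapley_value(sensor: str) -> float:
    """Average marginal accuracy gain of adding `sensor` over all coalitions."""
    others = [s for s in SENSORS if s != sensor]
    n, total = len(SENSORS), 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            s = frozenset(subset)
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (ACCURACY[s | {sensor}] - ACCURACY[s])
    return total

if __name__ == "__main__":
    for sensor in SENSORS:
        print(f"{sensor:>13}: {shapley_value(sensor):.3f}")
```

    Exact Shapley computation is exponential in the number of sensors, so practical systems typically approximate it by sampling coalition orderings.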

    Wearable Intrinsically Soft, Stretchable, Flexible Devices for Memories and Computing

    A recent trend in the development of mass-consumption electronic devices is towards electronic textiles (e-textiles), smart wearable devices, smart clothes, and flexible or printable electronics. Intrinsically soft, stretchable, flexible Wearable Memories and Computing devices (WMCs) bring us closer to sci-fi scenarios in which future electronic systems are totally integrated into our everyday outfits and help us achieve a higher comfort level, interacting on our behalf with other digital devices such as smartphones and domotics, or with analog devices such as our brain and peripheral nervous system. WMC will enable each of us to contribute to open and big data systems as individual nodes, providing real-time information about physical and environmental parameters (including air pollution, sound and light pollution, chemical or radioactive fallout alerts, network availability, and so on). Furthermore, WMC could be directly connected to the human brain, enabling extremely fast operation and unprecedented interface complexity by directly mapping the continuous states available to biological systems. This review focuses on recent advances in nanotechnology and materials science and pays particular attention to results and promising technologies that could enable intrinsically soft, stretchable, flexible WMC

    Human activity detection based on mobile devices

    This thesis focuses on human activity detection based on mobile and wearable devices. We chose Hexiwear as our wearable device to collect daily human activity data, such as tri-axis acceleration, tri-axis orientation, tri-axis angular velocity, and position. The project consists of the development of a smartphone application for data analysis, data visualization, and result generation. The objective is to build an open and modular prototype that can serve as an example or template for the development of other projects. The application is developed in Java using Android Studio. It allows the user to connect with the wearable device and recognize their daily activity. For the daily activity classification algorithm, we used two different methods: the first sets different thresholds, the second uses machine learning. The application was tested and the results were satisfactory, as the generated application worked properly. Despite the obvious limitations, the work done is a starting point for future developments
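    The thesis application itself is written in Java for Android; as a hedged illustration of the first (threshold-based) classification method it mentions, the Python sketch below labels a short window of tri-axis accelerometer samples. The thresholds and activity labels are assumptions for the example, not values taken from the thesis.

```python
# A minimal sketch of threshold-based activity classification from
# tri-axis acceleration. Thresholds and labels are illustrative assumptions.
import math
from typing import List, Tuple

def magnitude(sample: Tuple[float, float, float]) -> float:
    """Euclidean norm of one tri-axis accelerometer sample, in g."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def classify_window(samples: List[Tuple[float, float, float]]) -> str:
    """Label a short window of accelerometer samples by simple thresholds."""
    mags = [magnitude(s) for s in samples]
    spread = max(mags) - min(mags)
    if spread < 0.1:
        return "resting"   # almost no variation around gravity
    if spread < 0.8:
        return "walking"   # moderate periodic variation
    return "running"       # large swings in acceleration

if __name__ == "__main__":
    window = [(0.1, 0.2, 0.97), (0.3, 0.1, 1.05), (0.2, 0.15, 1.0)]
    print(classify_window(window))  # -> "walking"
```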