49 research outputs found

    Development of a Racing Video Game Using the Smartphone as a Steering Wheel

    Racing games have long been one of the most popular video game genres. In this genre there is strong demand for steering-wheel-style controllers, which make the gaming experience more immersive and entertaining. However, these controllers tend to be expensive and not every player can afford them. In the search for a more affordable alternative that offers players of this genre the same immersion, SmartDrive was implemented, a racing video game for Android developed in Unity 3D. SmartDrive uses two smartphones running Android 5.0 or higher connected via a WiFi hotspot: one serves as the game screen and the other as a steering wheel thanks to its motion sensors. The validation involved 30 participants who played the game and provided their assessments through a survey. The final results highlighted the participants' acceptance of SmartDrive, which offers an immersive experience with a novel and intuitive game mechanic implemented with more affordable resources. This study showed the potential of SmartDrive as an accessible alternative that does not sacrifice entertainment value.
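    The abstract does not give implementation details, so the following is only a hedged sketch of the kind of "phone as steering wheel" loop such an architecture implies: the controller phone's tilt is mapped to a steering value and streamed to the display phone over the hotspot. The UDP transport, port number, JSON packet format, 60 Hz rate, and address are illustrative assumptions, not details taken from SmartDrive (whose game logic runs in Unity on Android).

```python
import json
import socket
import time

# Hypothetical "phone as steering wheel" controller loop. In the real game this
# logic would live on the Android device (Unity/C#); here the tilt reading is
# faked, and the 60 Hz rate, port 5005, address and JSON packet format are
# assumptions, not details from the SmartDrive paper.

DISPLAY_PHONE_ADDR = ("192.168.43.1", 5005)  # assumed hotspot host address


def tilt_to_steering(roll_deg: float, max_angle: float = 45.0) -> float:
    """Map the phone's roll angle to a steering value in [-1.0, 1.0]."""
    clamped = max(-max_angle, min(max_angle, roll_deg))
    return clamped / max_angle


def read_roll_deg() -> float:
    """Placeholder for accelerometer/gyroscope fusion on the device."""
    return 12.0  # pretend the phone is tilted 12 degrees to the right


def controller_loop(sock: socket.socket, hz: float = 60.0) -> None:
    """Send the current steering value to the display phone at a fixed rate."""
    period = 1.0 / hz
    while True:
        packet = {"steer": tilt_to_steering(read_roll_deg()), "t": time.time()}
        sock.sendto(json.dumps(packet).encode("utf-8"), DISPLAY_PHONE_ADDR)
        time.sleep(period)


if __name__ == "__main__":
    controller_loop(socket.socket(socket.AF_INET, socket.SOCK_DGRAM))
```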

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-Based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
    A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.
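    As a rough illustration of the camera-based vital-sign monitoring the report surveys (and not a method taken from the report itself), heart rate can be estimated from subtle colour changes in skin pixels: average the green channel over a tracked face region in each frame and take the dominant frequency within a plausible pulse band. The 30 fps frame rate, the 0.7-3.0 Hz band, and the synthetic input trace below are illustrative assumptions.

```python
import numpy as np

# Toy remote-photoplethysmography (rPPG) sketch: estimate heart rate from the
# per-frame mean green-channel intensity of a skin region. The 30 fps rate,
# the 0.7-3.0 Hz band (42-180 bpm) and the synthetic trace are assumptions.

FPS = 30.0


def estimate_heart_rate_bpm(green_means: np.ndarray, fps: float = FPS) -> float:
    """Return the dominant pulse frequency (in bpm) of a detrended green-channel trace."""
    signal = green_means - green_means.mean()        # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)           # plausible heart-rate band
    peak = freqs[band][np.argmax(spectrum[band])]
    return float(peak * 60.0)


if __name__ == "__main__":
    # Synthetic 20 s trace: a 1.2 Hz (72 bpm) pulse plus noise, standing in for
    # the mean green value of a tracked face region in each video frame.
    t = np.arange(0, 20, 1.0 / FPS)
    trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
    print(f"Estimated heart rate: {estimate_heart_rate_bpm(trace):.1f} bpm")
```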

    Parkinson's Disease Management through ICT

    Parkinson's Disease (PD) is a neurodegenerative disorder that manifests with motor and non-motor symptoms. PD treatment is symptomatic and tries to alleviate the associated symptoms through an adjustment of the medication. As the disease evolves and this evolution is patient specific, it can be very difficult to manage the disease properly. The currently available technology (electronics, communication, computing, etc.), correctly combined with wearables, can be of great use for obtaining and processing useful information for both clinicians and patients, allowing them to become actively involved in their condition. Parkinson's Disease Management through ICT: The REMPARK Approach presents the work done, main results and conclusions of the REMPARK project (2011 – 2015), funded by the European Union under contract FP7-ICT-2011-7-287677. The REMPARK system was proposed and developed as a real Personal Health Device for the Remote and Autonomous Management of Parkinson's Disease, composed of different levels of interaction with the patient, clinician and carers, and integrating a set of interconnected sub-systems: sensor, auditory cueing, smartphone and server. The sensor subsystem, using embedded algorithms, is able to detect the motor symptoms associated with PD in real time. This information, sent through the smartphone to the REMPARK server, is used for efficient management of the disease.

    State of the Art of Audio- and Video-Based Solutions for AAL

    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters. Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals. Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
    A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely lifelogging and self-monitoring, remote monitoring of vital signs, emotional state recognition, food intake monitoring, activity and behaviour recognition, activity and personal assistance, gesture recognition, fall detection and prevention, mobility assessment and frailty recognition, and cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.

    Convex Interaction: Extending Spatial Interaction through Action Compression Using VR


    Parkinson's disease management through ICT: the REMPARK approach

    Parkinson's Disease (PD) is a neurodegenerative disorder that manifests with motor and non-motor symptoms. PD treatment is symptomatic and tries to alleviate the associated symptoms through an adjustment of the medication. As the disease evolves and this evolution is patient specific, it can be very difficult to manage the disease properly. The currently available technology (electronics, communication, computing, etc.), correctly combined with wearables, can be of great use for obtaining and processing useful information for both clinicians and patients, allowing them to become actively involved in their condition. Parkinson's Disease Management through ICT: The REMPARK Approach presents the work done, main results and conclusions of the REMPARK project (2011 - 2015), funded by the European Union under contract FP7-ICT-2011-7-287677. The REMPARK system was proposed and developed as a real Personal Health Device for the Remote and Autonomous Management of Parkinson's Disease, composed of different levels of interaction with the patient, clinician and carers, and integrating a set of interconnected sub-systems: sensor, auditory cueing, smartphone and server. The sensor subsystem, using embedded algorithms, is able to detect the motor symptoms associated with PD in real time. This information, sent through the smartphone to the REMPARK server, is used for efficient management of the disease. Implementation of REMPARK will increase the independence and quality of life of patients and improve their disease management, treatment and rehabilitation.
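    The abstract describes a pipeline shape (on-body sensing, embedded symptom detection, upload via the smartphone to a server) without algorithmic detail. The sketch below only illustrates that shape with a deliberately naive detector: the 3-8 Hz band is a commonly cited PD tremor range, but the 50 Hz sampling rate, threshold, payload fields, and endpoint URL are assumptions, not REMPARK's actual algorithms or API.

```python
import json
import numpy as np
from urllib import request

# Illustrative pipeline sketch: a naive tremor detector on wrist-accelerometer
# windows plus an upload step. The sampling rate, threshold, payload fields and
# endpoint URL are assumptions, not the REMPARK algorithms or server API.

SERVER_URL = "https://example.org/rempark/events"  # placeholder, not a real endpoint
FS = 50.0  # assumed accelerometer sampling rate (Hz)


def tremor_score(window: np.ndarray, fs: float = FS) -> float:
    """Fraction of acceleration energy in the 3-8 Hz band (toy detector)."""
    x = window - window.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    in_band = power[(freqs >= 3.0) & (freqs <= 8.0)].sum()
    return float(in_band / (power.sum() + 1e-12))


def report_if_symptomatic(window: np.ndarray, t_start_s: float,
                          threshold: float = 0.6) -> None:
    """If the window looks tremor-like, send an event to the management server."""
    score = tremor_score(window)
    if score < threshold:
        return
    payload = json.dumps({"kind": "tremor", "start_s": t_start_s, "score": score})
    req = request.Request(SERVER_URL, data=payload.encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # a real client would add authentication, batching, retries
```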

    Measuring user experience for virtual reality

    In recent years, Virtual Reality (VR) and 3D User Interfaces (3DUI) have seen a drastic increase in popularity, especially in terms of consumer-ready hardware and software. These technologies have the potential to create new experiences that combine the advantages of reality and virtuality. While the technology for both input and output devices is market ready, only a few solutions for everyday VR (online shopping, games, or movies) exist, and empirical knowledge about performance and user preferences is lacking. All this makes the development and design of human-centered user interfaces for VR a great challenge. This thesis investigates the evaluation and design of interactive VR experiences. We introduce the Virtual Reality User Experience (VRUX) model based on VR-specific external factors and evaluation metrics such as task performance and user preference. Based on our novel UX evaluation approach, we contribute by exploring the following directions: shopping in virtual environments, as well as text entry and menu control in the context of everyday VR. Along with this, we summarize our findings in design spaces and guidelines for choosing optimal interfaces and controls in VR.

    Designing Intra-Hand Input for Wearable Devices

    Current trends toward the miniaturization of digital technology have enabled the development of versatile smart wearable devices. Powered by capable processors and equipped with advanced sensors, this novel device category can substantially impact application areas as diverse as education, health care, and entertainment. However, despite their increasing sophistication and potential, input techniques for wearable devices are still relatively immature and often fail to reflect key practical constraints in this design space. For example, on-device touch surfaces, such as the temple touchpad of Google Glass, are typically small and out of sight, thus limiting their expressivity. Furthermore, input techniques designed specifically for Head-Mounted Displays (HMDs), such as free-hand (e.g., Microsoft Hololens) or dedicated controller (e.g., Oculus VR) tracking, exhibit low levels of social acceptability (e.g., large-scale hand gestures are arguably unsuited for use in public settings) and are prone to causing fatigue (e.g., gorilla arm) in long-term use. Such factors limit their real-world applicability. In addition to these difficulties, typical wearable use scenarios feature various situational impairments, such as encumbered use (e.g., having one hand busy), mobile use (e.g., while walking), and eyes-free use (e.g., while responding to real-world stimuli). These considerations are weakly catered for by the design of current wearable input systems. This dissertation seeks to address these problems by exploring the design space of intra-hand input, which refers to small-scale actions made within a single hand. In particular, through a hand-mounted sensing system, intra-hand input can cover diverse input surfaces, ranging from the fingers themselves (e.g., fingers-to-thumb and thumb-to-fingers inputs) to other body surfaces (e.g., hand-to-face inputs). Here, I identify several advantages of this form of hand input, as follows. First, the hand's high dexterity can enable comfortable, quick, accurate, and expressive inputs of various types (e.g., tap, flick, or swipe touches) at multiple locations (e.g., on each of the five fingers or other body surfaces). In addition, many viable forms of these input movements are small-scale, promising low fatigue over long-term use and basic actions that are discrete and socially acceptable. Finally, intra-hand input is inherently robust to many common situational impairments, such as use that takes place in eyes-free, public, or mobile settings. Consolidating these prospective advantages, the general claim of this dissertation is that intra-hand input is an expressive and effective modality for interaction with wearable devices such as HMDs. The dissertation seeks to demonstrate that this claim holds in a range of wearable scenarios and applications, and with measures of both objective performance (e.g., time, errors, accuracy) and subjective experience (e.g., comfort or social acceptability). Specifically, in this dissertation, I verify the general claim by demonstrating it in three separate scenarios. I begin by exploring the design space of intra-hand input by studying the specific case of touches to a set of five touch-sensitive nails. To this end, I first conduct an exploratory design process in which a large set of 144 input actions is generated, followed by two empirical studies on comfort and performance that refine this large set to 29 viable inputs.
The results of this work indicate that nail touches are an accessible, expressive, and comfortable form of input. Based on these results, in the second scenario, I focused on text entry in a mobile setting with the same nail form-factor system. Through a comparative empirical study involving both sitting and mobile conditions, nail-based touches were confirmed to be robust to physical disturbance while mobile. A follow-up word repetition study indicated that text entry rates of up to 33.1 WPM could be achieved when key layouts were appropriately optimized for the nail form factor. These results reveal that intra-hand inputs are suitable for complex input tasks in mobile contexts. In the third scenario, I explored an alternative form of intra-hand input that relies on small-scale hand touches to the face, viewed through the lens of social acceptability. This scenario is especially valuable for multi-wearable usage contexts, as a single hand-mounted system can enable input at close range for each device scattered around the body (e.g., hand-to-face input for smartglasses or an ear-worn device, and inter-finger input in a wristwatch posture for a smartwatch). However, making an input on the face can attract unwanted, undue attention from the public. Thus, the design stage of this work involved eliciting diverse unobtrusive and socially acceptable hand-to-face actions from users; these outcomes were then refined into five design strategies that can achieve socially acceptable input in this setting. Follow-up studies on a prototype that instantiates these strategies validate their effectiveness and provide a characterization of the speed and accuracy achieved by the user with each system. I argue that this spectrum of metrics, recorded over a diverse set of scenarios, supports the general claim that intra-hand input for wearable devices can be operated expressively and effectively in terms of objective performance (e.g., time, errors, accuracy) and subjective experience (e.g., comfort or social acceptability) in common wearable use scenarios, such as when mobile and in public. I conclude with a discussion of the contributions of this work, the scope for further developments, and the design issues that need to be considered by researchers, designers, and developers who seek to implement these types of input. This discussion spans diverse considerations, such as suitable tracking technologies, appropriate body regions, viable input types, and effective design processes. Through this discussion, this dissertation seeks to provide practical guidance to support and accelerate further research efforts aimed at achieving real-world systems that realize the potential of intra-hand input for wearables.
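    The dissertation's recognizers are not described at code level in this abstract; as a purely hypothetical sketch of how small-scale intra-hand touches could be turned into discrete commands, the snippet below labels a short per-nail touch trace as a tap, swipe, or flick from its duration and displacement. The nail identifiers, thresholds, and one-dimensional position model are illustrative assumptions, not the actual system.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical classifier for small-scale intra-hand touches: given a short
# trace of (time, position) samples from one touch-sensitive nail, label it as
# a tap, swipe, or flick. Nail IDs, thresholds and the 1-D position model are
# illustrative assumptions, not the dissertation's recognizer.


@dataclass
class TouchTrace:
    nail: str                            # e.g. "index", "middle", ...
    samples: List[Tuple[float, float]]   # (time in s, position along the nail in mm)


def classify(trace: TouchTrace,
             tap_max_travel_mm: float = 1.5,
             flick_min_speed_mm_s: float = 60.0) -> str:
    """Return 'tap', 'flick', or 'swipe' for one touch trace."""
    (t0, x0), (t1, x1) = trace.samples[0], trace.samples[-1]
    duration = max(t1 - t0, 1e-6)
    travel = abs(x1 - x0)
    if travel < tap_max_travel_mm:
        return "tap"            # barely any movement: treat as a discrete tap
    if travel / duration >= flick_min_speed_mm_s:
        return "flick"          # fast, short stroke
    return "swipe"              # slower, deliberate stroke


if __name__ == "__main__":
    quick = TouchTrace("index", [(0.00, 0.0), (0.05, 6.0)])  # 120 mm/s -> flick
    slow = TouchTrace("ring", [(0.00, 0.0), (0.40, 6.0)])    # 15 mm/s  -> swipe
    still = TouchTrace("thumb", [(0.00, 0.0), (0.12, 0.5)])  # barely moves -> tap
    for tr in (quick, slow, still):
        print(tr.nail, classify(tr))
```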