659 research outputs found

    3D locomotion biomimetic robot fish with haptic feedback

    This thesis developed a biomimetic robot fish and built a novel haptic robot fish system based on kinematic modelling and three-dimensional computational fluid dynamics (CFD) hydrodynamic analysis. The most important contribution is the successful CFD simulation of the robot fish, which helps users understand the hydrodynamic properties around it.
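
    The abstract does not spell out the kinematic model; a common choice in biomimetic robot fish work is a Lighthill-style travelling body wave, so the sketch below is only an illustration under that assumption, with made-up coefficient values.

```python
import numpy as np

# Assumed Lighthill-style carangiform body wave; the thesis's actual model and
# parameter values are not given in the abstract, so these numbers are illustrative.
c1, c2 = 0.05, 0.08        # linear and quadratic amplitude-envelope coefficients
k = 2 * np.pi / 1.0        # body wave number (one wavelength per body length)
omega = 2 * np.pi * 1.5    # tail-beat angular frequency (1.5 Hz)

def midline(t, n_points=50):
    """Lateral displacement y(x, t) of the fish midline along its normalized length."""
    x = np.linspace(0.0, 1.0, n_points)
    y = (c1 * x + c2 * x**2) * np.sin(k * x - omega * t)
    return x, y

# Sampling the midline over time could drive joint angles or seed a CFD mesh motion.
for t in np.linspace(0.0, 1.0 / 1.5, 5):
    x, y = midline(t)
    print(f"t = {t:.3f} s, tail-tip deflection = {y[-1]:+.3f} body lengths")
```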

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, like hybrid broadband broadcast (HBB) delivery and users' perception modeling (i.e., Quality of Experience or QoE), and paves the way for the future (e.g., towards the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is getting renewed attention to overcome remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance and the multiple disciplines it involves, the availability of a reference book on mediasync becomes necessary. This book fills that gap. In particular, it addresses key aspects and reviews the most relevant contributions within the mediasync research space, from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge about this research area, and also approach the challenges behind ensuring the best mediated experiences, by providing adequate synchronization between the media elements that constitute these experiences.
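
    As a concrete flavour of what media synchronization involves, the toy sketch below shows a master/slave style inter-destination synchronization check: receivers report their playout positions and lagging ones are told how far to skip ahead. The class names, threshold, and scheme are illustrative assumptions, not taken from the handbook.

```python
from dataclasses import dataclass

@dataclass
class Receiver:
    name: str
    playout_pos: float  # current media playout position, in seconds

def idms_adjustments(receivers, threshold=0.080):
    """Playout corrections toward the most advanced receiver.

    Differences below `threshold` (an assumed 80 ms tolerance) are ignored
    to avoid needless skips or pauses.
    """
    master_pos = max(r.playout_pos for r in receivers)
    return {r.name: master_pos - r.playout_pos
            for r in receivers
            if master_pos - r.playout_pos > threshold}

receivers = [Receiver("living-room-tv", 42.310),
             Receiver("tablet", 42.512),
             Receiver("phone", 42.505)]
print(idms_adjustments(receivers))  # only 'living-room-tv' needs a ~0.202 s correction
```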

    An aesthetics of touch: investigating the language of design relating to form

    How well can designers communicate qualities of touch? This paper presents evidence that they have some capability to do so, much of which appears to have been learned, but that at present they make limited use of such language. Interviews with graduate designer-makers suggest that they are aware of and value the importance of touch and materiality in their work, but lack a vocabulary for touch comparable to their detailed explanations of other aspects, such as their intent or selection of materials. We believe that more attention should be paid to the verbal dialogue that happens in the design process, particularly as other researchers have shown that even making-based learning has a strong verbal element. However, verbal language alone does not appear to be adequate for a comprehensive language of touch. Graduate designer-makers' descriptive practices embedded non-verbal manipulation within verbal accounts. We thus argue that haptic vocabularies do not simply describe material qualities, but rather are situated competences that physically demonstrate the presence of haptic qualities. Such competences are more important than verbal vocabularies in isolation. Design support for developing and extending haptic competences must take this wide range of considerations into account to comprehensively improve designers' capabilities.

    Augmenting the Spatial Perception Capabilities of Users Who Are Blind

    People who are blind face a series of challenges and limitations resulting from their inability to see, forcing them either to seek the assistance of a sighted individual or to work around the challenge through an inefficient adaptation (e.g. following the walls in a room in order to reach a door rather than walking in a straight line to the door). These challenges are directly related to blind users' lack of the spatial perception capabilities normally provided by the human vision system. In order to overcome these spatial perception related challenges, modern technologies can be used to convey spatial perception data through sensory substitution interfaces. This work is the culmination of several projects which address varying spatial perception problems for blind users. First we consider the development of non-visual natural user interfaces for interacting with large displays. This work explores the haptic interaction space in order to find useful and efficient haptic encodings for the spatial layout of items on large displays. Multiple interaction techniques are presented which build on prior research (Folmer et al. 2012), and the efficiency and usability of the most efficient of these encodings are evaluated with blind children. Next we evaluate the use of wearable technology in aiding navigation of blind individuals through large open spaces lacking the tactile landmarks used during traditional white cane navigation. We explore the design of a computer vision application with an unobtrusive aural interface to minimize veering of the user while crossing a large open space. Together, these projects represent an exploration into the use of modern technology in augmenting the spatial perception capabilities of blind users.
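
    As one hedged illustration of the aural anti-veering idea mentioned above, the sketch below maps the user's heading error relative to the target bearing onto a stereo pan and volume; the actual interface design in the dissertation may differ, and the mapping and limits here are assumptions.

```python
def heading_error_deg(current_heading, target_bearing):
    """Signed smallest angular difference in degrees, in (-180, 180]."""
    return (target_bearing - current_heading + 180.0) % 360.0 - 180.0

def audio_cue(current_heading, target_bearing, max_error=45.0):
    """Map heading error to a stereo pan in [-1 (left ear), +1 (right ear)] and a volume.

    A negative pan means the cue favours the left ear, i.e. the target bearing lies
    to the user's left; louder cues indicate larger veering.
    """
    err = heading_error_deg(current_heading, target_bearing)
    pan = max(-1.0, min(1.0, err / max_error))
    volume = min(1.0, abs(err) / max_error)
    return pan, volume

# User drifting 5 degrees clockwise of the crossing direction -> soft cue in the left ear.
print(audio_cue(current_heading=95.0, target_bearing=90.0))
```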

    A prospective look: key enabling technologies, applications and open research topics in 6G networks

    The fifth generation (5G) mobile networks are envisaged to enable a plethora of breakthrough advancements in wireless technologies, providing support for a diverse set of services over a single platform. While the deployment of 5G systems is scaling up globally, it is time to look ahead for beyond 5G systems. This is mainly driven by the emerging societal trends, calling for fully automated systems and intelligent services supported by extended reality and haptic communications. To accommodate the stringent requirements of their prospective applications, which are data-driven and defined by extremely low-latency, ultra-reliable, fast and seamless wireless connectivity, research initiatives are currently focusing on a progressive roadmap towards the sixth generation (6G) networks, which are expected to bring transformative changes to this premise. In this article, we shed light on some of the major enabling technologies for 6G, which are expected to revolutionize the fundamental architectures of cellular networks and provide multiple homogeneous artificial intelligence-empowered services, including distributed communications, control, computing, sensing, and energy, from its core to its end nodes. In particular, the present paper aims to answer several 6G framework related questions: What are the driving forces for the development of 6G? How will the enabling technologies of 6G differ from those in 5G? What kind of applications and interactions will they support that would not be supported by 5G? We address these questions by presenting a comprehensive study of the 6G vision and outlining seven of its disruptive technologies, i.e., mmWave communications, terahertz communications, optical wireless communications, programmable metasurfaces, drone-based communications, backscatter communications and tactile internet, as well as their potential applications. Then, by leveraging the state-of-the-art literature surveyed for each technology, we discuss the associated requirements, key challenges, and open research problems. These discussions are thereafter used to open up the horizon for future research directions.
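
    To make the terahertz point concrete: free-space path loss alone grows with carrier frequency as 20·log10(4πdf/c), before accounting for molecular absorption, which is one reason THz links rely on very dense deployment and highly directional antennas. The comparison below is a simple worked example, not drawn from the surveyed article.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

# A sub-6 GHz 5G carrier vs. a candidate 6G terahertz carrier over the same 10 m link.
for f in (3.5e9, 300e9):
    print(f"{f / 1e9:6.1f} GHz over 10 m: {fspl_db(10, f):5.1f} dB")
# Roughly 63 dB at 3.5 GHz vs roughly 102 dB at 300 GHz.
```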

    Toward hyper-realistic and interactive social VR experiences in live TV scenarios

    Social Virtual Reality (VR) allows multiple distributed users to get together in shared virtual environments to socially interact and/or collaborate. This article explores the applicability and potential of Social VR in the broadcast sector, focusing on a live TV show use case. For such a purpose, a novel and lightweight Social VR platform is introduced. The platform provides three key outstanding features compared to state-of-the-art solutions. First, it allows real-time integration of remote users in shared virtual environments, using realistic volumetric representations and affordable capturing systems, thus not relying on the use of synthetic avatars. Second, it supports a seamless and rich integration of heterogeneous media formats, including 3D scenarios, dynamic volumetric representation of users and (live/stored) stereoscopic 2D and 180°/360° videos. Third, it enables low-latency interaction between the volumetric users and a video-based presenter (chroma keying), and a dynamic control of the media playout to adapt to the session's evolution. The production process of an immersive TV show used to evaluate the experience is also described. On the one hand, the results from objective tests show the satisfactory performance of the platform. On the other hand, the promising results from user tests support the potential impact of the presented platform, opening up new opportunities in the broadcast sector, among others. This work has been partially funded by the European Union's Horizon 2020 program, under agreement no. 762111 (VRTogether project), and partially by ACCIÓ, under agreement COMRDI18-1-0008 (ViVIM project). Work by Mario Montagud has been additionally funded by the Spanish Ministry of Science, Innovation and Universities with a Juan de la Cierva – Incorporación grant (reference IJCI-2017-34611). The authors would also like to thank the EU H2020 VRTogether project consortium for their relevant and valuable contributions.
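
    The abstract mentions compositing a video-based presenter via chroma keying; the platform's actual (likely GPU-based) pipeline is not described, but the toy NumPy sketch below shows the basic idea of building a soft alpha matte from the distance to a key colour. The key colour and threshold values are assumptions.

```python
import numpy as np

def chroma_key(frame_rgb, key_rgb=(0, 177, 64), threshold=60.0, softness=40.0):
    """Alpha matte in [0, 1]: 0 where a pixel matches the key colour, 1 elsewhere."""
    frame = frame_rgb.astype(np.float32)
    key = np.array(key_rgb, dtype=np.float32)
    dist = np.linalg.norm(frame - key, axis=-1)                # per-pixel distance to key colour
    return np.clip((dist - threshold) / softness, 0.0, 1.0)    # soft edge between drop and keep

# Composite the keyed presenter frame over a (here random) virtual background.
presenter = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
background = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)
alpha = chroma_key(presenter)[..., None]
composite = (alpha * presenter + (1 - alpha) * background).astype(np.uint8)
```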

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices to monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
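
    To illustrate the kind of inference involved (FLAME's actual rule base and inputs are richer; the game events, membership functions, and rules below are hypothetical), a minimal fuzzy estimate of player satisfaction might look like this:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_satisfaction(deaths_per_min, hit_accuracy):
    # Fuzzify the (hypothetical) game-state inputs.
    dying_often = tri(deaths_per_min, 0.5, 2.0, 4.0)
    doing_well = tri(hit_accuracy, 0.2, 0.6, 1.0)
    # Two toy rules:
    #   IF dying often AND NOT doing well THEN satisfaction is low
    #   IF doing well  AND NOT dying often THEN satisfaction is high
    low = min(dying_often, 1.0 - doing_well)
    high = min(doing_well, 1.0 - dying_often)
    if low + high == 0.0:
        return 0.5
    # Defuzzify as a weighted average of the rule consequents (low -> 0.2, high -> 0.8).
    return (0.2 * low + 0.8 * high) / (low + high)

print(estimate_satisfaction(deaths_per_min=3.0, hit_accuracy=0.3))  # below 0.5: leaning frustrated
print(estimate_satisfaction(deaths_per_min=0.8, hit_accuracy=0.7))  # above 0.5: leaning satisfied
```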

    Toward New Ecologies of Cyberphysical Representational Forms, Scales, and Modalities

    Get PDF
    Research on tangible user interfaces commonly focuses on tangible interfaces acting alone or in comparison with screen-based multi-touch or graphical interfaces. In contrast, hybrid approaches can be seen as the norm for established mainstream interaction paradigms. This dissertation describes interfaces that support complementary information mediations, representational forms, and scales toward an ecology of systems embodying hybrid interaction modalities. I investigate systems combining tangible and multi-touch interaction, as well as systems combining tangible and virtual reality interaction. For each of them, I describe work focusing on design and fabrication aspects, as well as work focusing on reproducibility, engagement, legibility, and perception aspects.