
    An original framework for understanding human actions and body language by using deep neural networks

    The evolution of both Computer Vision (CV) and Artificial Neural Networks (ANNs) has allowed the development of efficient automatic systems for the analysis of people's behaviour. By studying hand movements it is possible to recognize gestures, which people often use to communicate information in a non-verbal way. These gestures can also be used to control or interact with devices without physically touching them. In particular, sign language and semaphoric hand gestures are the two foremost areas of interest due to their importance in Human-Human Communication (HHC) and Human-Computer Interaction (HCI), respectively. The processing of body movements, in turn, plays a key role in the action recognition and affective computing fields. The former is essential to understand how people act in an environment, while the latter tries to interpret people's emotions based on their poses and movements; both are essential tasks in many computer vision applications, including event recognition and video surveillance. In this Ph.D. thesis, an original framework for understanding actions and body language is presented. The framework is composed of three main modules: the first proposes a method based on Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) for the recognition of sign language and semaphoric hand gestures; the second presents a solution based on 2D skeletons and two-branch stacked LSTM-RNNs for action recognition in video sequences; finally, the last module provides a solution for basic non-acted emotion recognition using 3D skeletons and Deep Neural Networks (DNNs). The performance of LSTM-RNNs is explored in depth, due to their ability to model the long-term contextual information of temporal sequences, which makes them suitable for analysing body movements. All the modules were tested on challenging datasets, well known in the state of the art, showing remarkable results compared to current literature methods.
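    As a rough illustration of the second module's idea, the sketch below shows a two-branch stacked LSTM in PyTorch that classifies actions from 2D-skeleton sequences. It is not the thesis' actual architecture: the layer sizes, joint count, and fusion by concatenation are assumptions. One branch sees raw joint coordinates while the other sees frame-to-frame displacements, mirroring the common practice of feeding pose and motion cues separately.

```python
# Hypothetical two-branch stacked LSTM for 2D-skeleton action recognition;
# hyperparameters and the fusion strategy are illustrative assumptions.
import torch
import torch.nn as nn

class TwoBranchLSTM(nn.Module):
    def __init__(self, joint_dim=36, hidden=128, num_classes=20):
        super().__init__()
        # Branch 1: raw joint coordinates per frame (e.g. 18 joints x 2 = 36 values)
        self.pose_branch = nn.LSTM(joint_dim, hidden, num_layers=2, batch_first=True)
        # Branch 2: frame-to-frame joint displacements (motion cues)
        self.motion_branch = nn.LSTM(joint_dim, hidden, num_layers=2, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, joints):                        # joints: (batch, time, joint_dim)
        motion = joints[:, 1:] - joints[:, :-1]       # temporal differences
        _, (h_pose, _) = self.pose_branch(joints)
        _, (h_motion, _) = self.motion_branch(motion)
        # Fuse the final hidden state of the last layer from each branch
        fused = torch.cat([h_pose[-1], h_motion[-1]], dim=1)
        return self.classifier(fused)

# Toy usage: a batch of 4 sequences, 60 frames each -> class logits of shape (4, 20)
logits = TwoBranchLSTM()(torch.randn(4, 60, 36))
```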

    Virtual reality in theatre education and design practice - new developments and applications

    The global use of Information and Communication Technologies (ICTs) has already established new approaches to theatre education and research, shifting traditional methods of knowledge delivery towards a more visually enhanced experience, which is especially important for teaching scenography. In this paper, I examine the role of multimedia within the field of theatre studies, with particular focus on the theory and practice of theatre design and education. I discuss various IT applications that have transformed the way we experience, learn and co-create our cultural heritage. I explore a suite of rapidly developing communication and computer-visualization techniques that enable reciprocal exchange between students, theatre performances and artefacts. Finally, I analyse novel technology-mediated teaching techniques that attempt to provide a new media platform for visually enhanced information transfer. My findings indicate that recent developments in the personalization of knowledge delivery, as well as in student-centred study and e-learning, necessitate the transformation of learners from passive consumers of digital products into active and creative participants in the learning experience.

    Application of Neural Radiance Fields (NeRFs) for 3D Model Representation in the Industrial Metaverse

    This study explores the utilization of Neural Radiance Fields (NeRFs), with a specific focus on the Instant NeRFs technique. The objective is to represent three-dimensional (3D) models within the context of the industrial metaverse, aiming to achieve a high-fidelity reconstruction of objects in virtual environments. NeRFs, renowned for their innovative approach, enable comprehensive model reconstructions by integrating diverse viewpoints and lighting conditions. The study employs tools such as Unity, Photon Pun2, and Oculus Interaction SDK to develop an immersive metaverse. Within this virtual industrial environment, users encounter numerous interactive six-dimensional (6D) models, fostering active engagement and enriching the overall experience. While initial implementations showcase promising results, they also introduce computational complexities. Nevertheless, this integration forms the basis for immersive comprehension and collaborative interactions within the industrial metaverse. The evolving potential of NeRF technology promises even more exciting prospects in the future.
    This work has been funded by the Spanish Government (Grant PID2020-117421RB-C21 funded by MCIN/AEI/10.13039/501100011033).
    Fabra, L.; Solanes, JE.; Muñoz García, A.; Martí Testón, A.; Alabau, A.; Gracia Calandin, LI. (2024). Application of Neural Radiance Fields (NeRFs) for 3D Model Representation in the Industrial Metaverse. Applied Sciences. 14(5). https://doi.org/10.3390/app1405182514
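    For context, the core of any NeRF-style method is the volume-rendering step that composites densities and colours sampled along a camera ray into a single pixel colour. The sketch below is a minimal NumPy illustration of that step only; it is not the Instant NeRF implementation used in the study, and the sample count and spacings are arbitrary.

```python
# Minimal NeRF-style volume rendering along one camera ray (illustrative only).
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """sigmas: (N,) densities, colors: (N, 3) RGB samples, deltas: (N,) sample spacings."""
    alphas = 1.0 - np.exp(-sigmas * deltas)                   # opacity of each ray segment
    # Transmittance: probability the ray reaches sample i without being absorbed
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans                                  # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)            # rendered pixel colour

# Toy example: 64 random samples along one ray
n = 64
pixel_rgb = composite_ray(np.random.rand(n), np.random.rand(n, 3), np.full(n, 0.05))
```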

    Systematic literature review of hand gestures used in human computer interaction interfaces

    Gestures, widely accepted as humans' natural mode of interaction with their surroundings, have been considered for use in human-computer interfaces since the early 1980s. They have been explored and implemented, with a range of success and maturity levels, in a variety of fields, facilitated by a multitude of technologies. Underpinning gesture theory, however, focuses on gestures performed simultaneously with speech, and the majority of gesture-based interfaces are supported by other modes of interaction. This article reports the results of a systematic review undertaken to identify characteristics of touchless/in-air hand gestures used in interaction interfaces. A total of 148 articles reporting on gesture-based interaction interfaces were reviewed, identified through searches of engineering and science databases (Engineering Village, ProQuest, Science Direct, Scopus and Web of Science). The goal of the review was to map the field of gesture-based interfaces, investigate patterns in gesture use, and identify common combinations of gestures for different combinations of applications and technologies. From the review, the community appears disparate, with little evidence of building upon prior work, and a fundamental framework of gesture-based interaction is not evident. However, the findings can help inform future developments and provide valuable information about the benefits and drawbacks of different approaches. It was further found that the nature and appropriateness of the gestures used was not a primary factor in gesture elicitation when designing gesture-based systems, and that ease of technology implementation often took precedence.

    Discrete event simulation and virtual reality use in industry: new opportunities and future trends

    This paper reviews the combined use of discrete event simulation (DES) and virtual reality (VR) within industry. While establishing a state of the art for progress in this area, the paper makes the case for VR DES as the vehicle of choice for complex data analysis through interactive simulation models, highlighting both its advantages and current limitations. It reviews active research topics such as VR and DES real-time integration, communication protocols, system design considerations, model validation, and applications of VR and DES. While summarizing future research directions for this technology combination, the case is made for smart factory adoption of VR DES as a new platform for scenario testing and decision making. It is argued that, for VR DES to fully meet the visualization requirements of both the Industry 4.0 and Industrial Internet visions of digital manufacturing, further research is required in the areas of lower-latency image processing, DES delivery as a service, gesture recognition for VR DES interaction, and linkage of DES to real-time data streams and Big Data sets.
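    To make the DES side concrete, the sketch below shows a minimal discrete event simulation loop in Python built on a priority queue; the callback stands in for the kind of state stream a VR front end could consume in real time. The event names, timings, and scheduling rule are invented for illustration and are not taken from the reviewed systems.

```python
# Minimal discrete event simulation loop (illustrative; event names are invented).
import heapq

def run_des(initial_events, until, on_state_change):
    """initial_events: list of (time, label) tuples; on_state_change: called per event."""
    queue = list(initial_events)
    heapq.heapify(queue)                                   # earliest event first
    while queue:
        clock, label = heapq.heappop(queue)
        if clock > until:
            break
        on_state_change(clock, label)                      # e.g. push an update to a VR client
        if label == "part_arrival":                        # schedule a hypothetical follow-up event
            heapq.heappush(queue, (clock + 2.5, "machine_done"))

run_des([(0.0, "part_arrival"), (1.0, "part_arrival")], until=10.0,
        on_state_change=lambda t, e: print(f"t={t:.1f}: {e}"))
```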

    Hand features extractor using hand contour – a case study

    Hand gesture recognition is an important topic in natural user interfaces (NUI), and hand feature extraction is its first step. This work proposes a novel real-time method for hand feature extraction. In our framework we use three cameras, and the hand region is extracted with the background subtraction method. Features such as the arm angle and finger positions are calculated using Y variations in the vertical contour image. Wrist detection is obtained by calculating the greatest distance between a base line and the hand contour, providing the main features for hand gesture recognition. Experiments on our own dataset of about 1800 images show that our method performs well and is highly efficient.
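    A hedged sketch of the described pipeline is given below, using OpenCV: the hand region is segmented by background subtraction, the largest contour is taken as the hand, and the contour point with the greatest vertical distance from a base line is selected. The single-camera setup, the base-line placement at the image bottom, and all names are assumptions for illustration; the original method uses three cameras and a fuller vertical-contour analysis that this simplification does not reproduce.

```python
# Hypothetical single-camera sketch of contour-based hand feature extraction.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2()

def hand_contour_features(frame):
    """Return the largest foreground contour and its farthest point from a base line."""
    mask = subtractor.apply(frame)                            # background subtraction
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)                 # largest blob taken as the hand
    pts = hand.reshape(-1, 2)                                 # contour points (x, y)
    base_y = frame.shape[0] - 1                               # assumed base line: image bottom
    distances = base_y - pts[:, 1]                            # vertical distance to the base line
    farthest = tuple(int(v) for v in pts[np.argmax(distances)])
    # In the paper, the largest base-line-to-contour distance drives wrist detection;
    # here it simply marks the most distant contour point.
    return {"contour": hand, "farthest_point": farthest}
```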

    Vision based 3D Gesture Tracking using Augmented Reality and Virtual Reality for Improved Learning Applications

    Augmented reality and virtual reality based on 3D gesture recognition and tracking have become a major research interest because of advanced technology in smartphones. By interacting with 3D objects in augmented reality and virtual reality, users gain a better understanding of the subject matter, although customized hardware support is often required and overall experimental performance needs to be satisfactory. This research investigates various current vision-based 3D gestural architectures for augmented reality and virtual reality. The core goal is to present an analysis of methods and frameworks, followed by the experimental performance of recognition and tracking of hand gestures and of interaction with virtual objects on smartphones. The experimental evaluation of existing methods is categorized into three areas, i.e. hardware requirements, documentation before the actual experiment, and datasets. These categories are expected to ensure robust validation for the practical usage of 3D gesture tracking based on augmented reality and virtual reality. The hardware setup includes types of gloves, fingerprint, and types of sensors. Documentation includes classroom setup manuals, questionnaires, recordings for improvement, and stress-test applications. The last part of the experimental section covers the datasets used by existing research. Overall, this comprehensive illustration of various methods, frameworks and experimental aspects can significantly contribute to augmented reality and virtual reality based on 3D gesture recognition and tracking.