
    Impact of Imaging and Distance Perception in VR Immersive Visual Experience

    Virtual reality (VR) headsets have evolved to offer unprecedented viewing quality. Meanwhile, they have become lightweight, wireless, and low-cost, which has opened the door to new applications and a much wider audience. VR headsets can now provide users with a greater understanding of events and greater accuracy of observation, making decision-making faster and more effective. However, the spread of immersive technologies has seen a slow take-up, with the adoption of virtual reality limited to a few applications, typically related to entertainment. This reluctance appears to be due to the often-necessary change of operating paradigm and some scepticism towards the "VR advantage". The need therefore arises to evaluate the contribution that a VR system can make to user performance, for example in monitoring and decision-making. This will help system designers understand when immersive technologies can be proposed to replace or complement standard display systems such as a desktop monitor. In parallel with the evolution of VR headsets there has been that of 360° cameras, which are now capable of instantly acquiring photographs and videos in stereoscopic 3D (S3D) at very high resolutions. 360° images are innately suited to VR headsets, where the captured view can be observed and explored through the natural rotation of the head. Acquired views can even be experienced and navigated from the inside as they are captured. The combination of omnidirectional images and VR headsets has opened the way to a new method of creating immersive visual representations. We call it photo-based VR. This new methodology combines traditional model-based rendering with high-quality omnidirectional texture mapping. Photo-based VR is particularly suitable for applications related to remote visits and realistic scene reconstruction, useful for monitoring and surveillance systems, control panels, and operator training. The presented PhD study investigates the potential of photo-based VR representations. It starts by evaluating the role of immersion and user performance in today's graphics-based visual experience, which is then used as a reference to develop and evaluate new photo-based VR solutions. With the current literature on photo-based VR experience and associated user performance being very limited, this study builds new knowledge from the proposed assessments. We conduct five user studies on a few representative applications, examining how visual representations are affected by system factors (camera- and display-related) and how they influence human factors (such as realism, presence, and emotions). Particular attention is paid to realistic depth perception, in support of which we develop targeted solutions for photo-based VR. They are intended to provide users with a correct perception of space dimensions and object size. We call it true-dimensional visualization. The presented work contributes to unexplored fields, including photo-based VR and true-dimensional visualization, offering immersive system designers a thorough understanding of the benefits, potential, and types of applications in which these new methods can make a difference. This thesis manuscript and its findings have been partly presented in scientific publications; in particular, five conference papers in Springer and IEEE symposia proceedings, [1], [2], [3], [4], [5], and one journal article in an IEEE periodical, [6], have been published.
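
    As an illustration of the basic mapping underlying photo-based VR, the minimal sketch below converts an equirectangular pixel coordinate into the unit viewing direction used when an omnidirectional photograph is texture-mapped onto a viewing sphere for a headset. The function name and image size are hypothetical and not taken from the thesis.

    import numpy as np

    def equirect_to_direction(u, v, width, height):
        """Map an equirectangular pixel (u, v) to a unit viewing direction.

        u in [0, width) spans longitude (-pi..pi); v in [0, height) spans
        latitude (pi/2..-pi/2). This is the standard mapping used when a
        360-degree photograph is wrapped onto a viewing sphere.
        """
        lon = (u / width) * 2.0 * np.pi - np.pi    # longitude of the pixel
        lat = np.pi / 2.0 - (v / height) * np.pi   # latitude of the pixel
        x = np.cos(lat) * np.sin(lon)
        y = np.sin(lat)
        z = np.cos(lat) * np.cos(lon)
        return np.array([x, y, z])

    # The centre pixel of a 4096x2048 panorama looks straight ahead (+z).
    print(equirect_to_direction(2048, 1024, 4096, 2048))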

    On-the-Go Reflectance Transformation Imaging with Ordinary Smartphones

    Reflectance Transformation Imaging (RTI) is a popular technique that allows the recovery of per-pixel reflectance information by capturing an object under different lighting conditions. This can later be used to reveal surface details and interactively relight the subject. The process, however, typically requires dedicated hardware setups to recover the light direction from multiple locations, making it tedious when performed outside the lab. We propose a novel RTI method that can be carried out by recording videos with two ordinary smartphones. The flash LED of one device is used to illuminate the subject while the other captures the reflectance. Since the LED is mounted close to the camera lenses, we can infer the light direction for thousands of images by freely moving the illuminating device while observing a fiducial marker surrounding the subject. To deal with this amount of data, we propose a neural relighting model that reconstructs object appearance for arbitrary light directions from extremely compact reflectance distribution data compressed via Principal Component Analysis (PCA). Experiments show that the proposed technique can easily be performed in the field, with the resulting RTI model outperforming state-of-the-art approaches that rely on dedicated hardware setups.
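
    A minimal sketch, not the authors' implementation, of the PCA compression step described above: per-pixel intensity observations gathered under many light directions are reduced to a handful of coefficients that a relighting model could then consume. Array shapes and variable names are assumed for illustration.

    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical data: an H x W crop observed under N light directions.
    H, W, N = 64, 64, 2000
    observations = np.random.rand(H * W, N)   # per-pixel intensity per light
    k = 8                                     # retained PCA components

    # Compress each pixel's reflectance distribution to k coefficients.
    pca = PCA(n_components=k)
    coeffs = pca.fit_transform(observations)  # shape (H*W, k)

    # A relighting network would map (coeffs, light direction) to intensity;
    # here we only show the compression/decompression round trip.
    reconstructed = pca.inverse_transform(coeffs)
    mse = np.mean((observations - reconstructed) ** 2)
    print(f"compression ratio: {N / k:.0f}x, reconstruction MSE: {mse:.4f}")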

    Creation of modular and procedural environments based on current urban architectures

    This project deals with current modular and procedural technologies and their use in 3D environment art. The use of tools that create and distribute elements based on a series of parameters is increasingly common in the creation of digital urban spaces, as it speeds up the creative process and can generate a multitude of unique versions. The goal is to deepen the author's knowledge of existing city-building approaches and to replicate their results with a modular and procedural approach using non-standard workflows. The result of this project is an urban space built within Unreal Engine, a detailed guide on the approach followed for its creation, and the assets used to construct it.
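
    By way of illustration, a parameter-driven placement rule of the kind such procedural tools express might look like the sketch below, written in plain Python rather than Unreal Engine Blueprints or PCG graphs; the module names and parameters are invented.

    import random

    # Hypothetical modular pieces; in practice these would be engine assets.
    MODULES = ["facade_a", "facade_b", "shop_front"]

    def generate_block(width, depth, floors, seed=0):
        """Lay out one city block from modular pieces, driven by a few parameters."""
        rng = random.Random(seed)           # same seed -> same block
        placements = []
        for x in range(width):
            for z in range(depth):
                # Only perimeter cells receive facade modules.
                if x in (0, width - 1) or z in (0, depth - 1):
                    for floor in range(floors):
                        corner = x in (0, width - 1) and z in (0, depth - 1)
                        module = "corner" if corner else rng.choice(MODULES)
                        placements.append({"module": module, "pos": (x, floor, z)})
        return placements

    # Different seeds yield unique but reproducible variations of a block.
    print(len(generate_block(width=6, depth=4, floors=3, seed=42)))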

    Role-Playing Reality: Queer Theory, New Materialisms, and Digital Role-Play

    This thesis works to reconfigure who or what the situated agencies in digital role-play are to realise the more-than-human dimensions and embodiments of play. In doing so, it finds that all the collaborators in digital role-play [players, avatars, interfaces, networks, software, media content, art, performances, gestics, imaginings, alongside other games] disclose the emergent and latent relations and sensations that characterise play. In recognising all these elements as vital and active companions in role-play, this work addresses the question of what the realities of digital role-play are: where realities signify the actualities of what happens when human and nonhuman bodies entangle during play as well as the substances of reality – performance and affect, matter and meaning, space and time – all of which determine role-play. World of Warcraft (Blizzard 2004-) is taken as the primary example in this thesis, though the affordances of its role-players are irradiated alongside other games, art, literature, performances, and materials that likewise ‘play’ with fiction. Alongside these modalities, the Argent Archives, a massive collection of content posted by role-players who play World of Warcraft, evidences the lifeworlds of digital role-play. Since digital role-play is rarely studied, and the Argent Archives never so, this thesis explores foundational questions regarding the realities of play: what they comprise and how players actively create emergent gameworlds with their arts and acts. This thesis employs a methodology of promiscuity, that is, promiscuity as method in order to reckon with the entanglements of play. Inspired by the works of queer theorists and new materialists, which centre bodies, affects, and entanglements, a correspondingly promiscuous methodology follows the labyrinthine folds of encounter that define play while emphasising its intimate, sensual, troubling, and perverse aspects

    Edoardo Benvenuto Prize. Collection of papers

    The promotion of studies and research on the science and art of building in their historical development constitutes the objective that the Edoardo Benvenuto Association has set itself since its establishment, in order to honor the memory of Edoardo Benvenuto (1940-1998). In recent years the Association has achieved interesting results by developing various activities, such as the organization of national and international meetings, conferences, and study days; collaborations with national and foreign research institutions; the promotion of the editorial series "Between Mechanics and Architecture"; and the activation of the portal Bibliotheca Mechanica Architectonica, the first "open source" digitized library dedicated to historical research on mechanical and architectural texts. But perhaps the most qualifying initiative was the institution of the Edoardo Benvenuto Prize, which in 2019 reached its twelfth edition, reserved for young researchers in the field of historical studies on the science and art of building. The Prize is awarded after an in-depth examination, by an international commission of experts, of the texts received by the Association. The purpose of this book is to collect and present the most recent studies and publications produced by the winners of the various editions of the Edoardo Benvenuto Prize.

    GIS-Based Site Suitability Analysis for Wind and Solar Photovoltaics Energy Plants in Central North Region, Namibia

    Increasing urbanisation and population growth are making it difficult for governments to achieve sustainable development. Provision of clean energy is among the seventeen sustainable development goals, as it reduces reliance on fossil fuels. In recent years, Namibia has rapidly increased its reliance on sustainable energy. Renewable energy sources (RESs), including wind and solar energy, are clean sources with a lesser negative environmental impact than conventional energy sources. Among the pressing challenges today is finding solutions for efficient solar and wind energy production. It is imperative to determine the optimum locations for RESs before installing them; this can significantly improve performance and establishes the foundation for studying both solar and wind power in a site-selection problem. This study aims to determine potential locations for the installation of wind and solar photovoltaic (PV) energy plants using one of the multi-criteria decision-making (MCDM) methods, the analytic hierarchy process (AHP), together with a geographic information system (GIS), within the Central North Regional Electricity Distributor (CENORED) supply area. Combining GIS with MCDM yields a powerful technique for selecting potential sites, since GIS provides effective analysis, manipulation, and visualization of geospatial data, whereas MCDM provides consistent weighting of criteria. In the evaluation of locations, topographical, environmental, climatic, and regulatory constraints were considered as factors that may facilitate or hinder the deployment of solar and wind energy power plants. For solar PV energy plants, the highest-potential areas are in the north-western, south-western, and southern regions of the study area, whereas for wind power only the north-western part is a highly suitable location for plant installation. These findings can be used to determine the most favourable locations for solar PV and wind power plant development, or to support the integration of electrical-grid expansion and off-grid electrification strategies.
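
    As an illustration of the AHP step, the sketch below derives criterion weights from a pairwise comparison matrix via its principal eigenvector and checks Saaty's consistency ratio. The matrix values and the criteria named in the comment are hypothetical, not those of the study.

    import numpy as np

    # Hypothetical pairwise comparisons for four criteria (e.g. solar irradiance,
    # slope, distance to grid, land use) on Saaty's 1-9 scale.
    A = np.array([
        [1.0, 3.0, 5.0, 7.0],
        [1/3, 1.0, 3.0, 5.0],
        [1/5, 1/3, 1.0, 3.0],
        [1/7, 1/5, 1/3, 1.0],
    ])

    # Criterion weights = principal eigenvector of A, normalised to sum to 1.
    eigvals, eigvecs = np.linalg.eig(A)
    i = np.argmax(eigvals.real)
    weights = eigvecs[:, i].real
    weights /= weights.sum()

    # Consistency ratio CR = CI / RI, with CI = (lambda_max - n) / (n - 1).
    n = A.shape[0]
    ci = (eigvals.real[i] - n) / (n - 1)
    ri = 0.90                                # Saaty's random index for n = 4
    cr = ci / ri
    print("weights:", np.round(weights, 3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable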

    Neural radiance fields for heads: towards accurate digital avatars

    Digitalizing humans in 3D environments has been a subject of study in computer vision and computer graphics for decades, but it still remains an open problem. No current technology can digitalize humans with excellent quality and dynamism that can be used in 3D engines, such as in a virtual reality headset or a mobile phone, at real-time speeds. In this thesis, we aim to contribute to this problem by exploring how to combine the two most commonly used approaches in recent years: neural radiance fields and parametric 3D meshes. We attempt to design a model capable of creating digital, animatable avatars of human faces at reasonable speeds. Our work focuses mostly on creating a machine learning model capable of generating a facial avatar from a set of images and camera poses, but we also build a pipeline that integrates all the steps of obtaining such data, allowing us to demonstrate our method on real-world data. Additionally, we implement a framework to generate synthetic data, in order to alleviate the errors that arise when obtaining real data, such as problems with camera calibration, and to facilitate the development of other human-related projects.
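
    For context, the sketch below shows the standard volume-rendering quadrature at the heart of neural radiance fields, compositing per-sample density and colour along a camera ray. It is a generic illustration, not the avatar model developed in the thesis.

    import numpy as np

    def render_ray(densities, colors, deltas):
        """Composite per-sample (density, colour) along one ray, NeRF-style.

        densities: (S,) non-negative volume densities sigma_i
        colors:    (S, 3) RGB predicted at each sample
        deltas:    (S,) distances between consecutive samples
        """
        alphas = 1.0 - np.exp(-densities * deltas)   # per-segment opacity
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]  # transmittance
        weights = trans * alphas                     # contribution of each sample
        return (weights[:, None] * colors).sum(axis=0)

    # Toy example: 64 samples along one ray with random density and colour.
    S = 64
    print(render_ray(np.random.rand(S), np.random.rand(S, 3), np.full(S, 0.05)))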

    Exploring Virtual Reality and Doppelganger Avatars for the Treatment of Chronic Back Pain

    Cognitive-behavioral models of chronic pain assume that fear of pain and subsequent avoidance behavior contribute to pain chronicity and the maintenance of chronic pain. In chronic back pain (CBP), avoidance of movements often plays a major role in pain perseverance and interference with daily life activities. In treatment, avoidance is often addressed by teaching patients to reduce pain behaviors and increase healthy behaviors. The current project explored the use of personalized virtual characters (doppelganger avatars) in virtual reality (VR) to influence motor imitation and avoidance, fear of pain, and experienced pain in CBP. We developed a method to create virtual doppelgangers, to animate them with movements captured from real-world models, and to present them to participants in an immersive cave automatic virtual environment (CAVE) as autonomous movement models for imitation. Study 1 investigated interactions between model and observer characteristics in the imitation behavior of healthy participants. We tested the hypothesis that perceived affiliative characteristics of a virtual model, such as similarity to the observer and likeability, would facilitate observers' engagement in voluntary motor imitation. In a within-subject design (N=33), participants were exposed to four virtual characters of different degrees of realism and observer similarity, ranging from an abstract stick person to a personalized doppelganger avatar designed from 3D scans of the observer. The characters performed different trunk movements and participants were asked to imitate them. We defined functional ranges of motion (ROM) for spinal extension (bending backward, BB), lateral flexion (bending sideward, BS), and rotation in the horizontal plane (RH) based on shoulder marker trajectories as behavioral indicators of imitation. Participants' ratings of perceived avatar appearance were recorded with an Autonomous Avatar Questionnaire (AAQ), structured by an exploratory factor analysis. Linear mixed effects models revealed that for lateral flexion (BS), a facilitating influence of avatar type on ROM was mediated by perceived identification with the avatar, including avatar likeability, avatar-observer similarity, and other affiliative characteristics. These findings suggest that maximizing model-observer similarity may indeed be useful to stimulate observational modeling. Study 2 employed the techniques developed in study 1 with participants who suffered from CBP and extended the setup with real-world elements, creating an immersive mixed reality. The research question was whether virtual doppelgangers could modify motor behaviors, pain expectancy, and pain. In a randomized controlled between-subject design, participants observed and imitated an avatar (AVA, N=17) or a videotaped model (VID, N=16) over three sessions, during which the movements BS and RH as well as a new movement (moving a beverage crate) were shown. Again, self-reports and ROMs were used as measures. The AVA group reported reduced avoidance, with no significant group differences in ROM. Pain expectancy increased in AVA but not in VID over the sessions. Pain and limitations did not significantly differ between groups. We observed a moderation effect of group, with prior pain expectancy predicting pain and avoidance in the VID but not in the AVA group. This can be interpreted as an effect of personalized movement models decoupling pain behavior from movement-related fear and pain expectancy by increasing pain tolerance and task persistence.
    Our findings suggest that personalized virtual movement models can stimulate observational modeling in general, and that they can increase pain tolerance and persistence in chronic pain conditions. Thus, they may provide a tool for exposure and exercise treatments in cognitive-behavioral treatment approaches to CBP.
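
    As an illustration of the analysis style mentioned above, a random-intercept linear mixed effects model relating ROM to avatar type could be fit as in the following sketch; the data frame, column names, and values are invented for demonstration only.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_rep = 10  # hypothetical repetitions per participant and avatar type

    # Long-format data: repeated ROM measurements nested within participants.
    data = pd.DataFrame({
        "participant": np.repeat([1, 2, 3], 2 * n_rep),
        "avatar_type": np.tile(["stick", "doppelganger"], 3 * n_rep),
        "rom_deg": rng.normal(30.0, 3.0, 6 * n_rep)
                   + np.repeat([0.0, 2.0, -2.0], 2 * n_rep),  # participant offsets
    })

    # Fixed effect of avatar type on ROM, random intercept per participant.
    model = smf.mixedlm("rom_deg ~ avatar_type", data, groups=data["participant"])
    result = model.fit()
    print(result.summary())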