225 research outputs found

    Spatially-encoded far-field representations for interactive walkthroughs

    Get PDF

    Digital Learning in the Wild: Re-Imagining New Ruralism, Digital Equity, and Deficit Discourses through the Thirdspace

    Get PDF
    Digital media is becoming increasingly important to learning in today’s changing times. At the same time, digital technologies and related digital skills are unevenly distributed, and deficit-based notions of this digital divide define the public’s educational paradigm. Against this backdrop, I forayed into the social reality of one rural Americana to examine digital learning in the wild. The larger purpose of this dissertation was to spatialize understandings of rural life and the pervasive social ills therein, in order to rethink digital equity, dismantle deficit thinking, problematize new ruralism, and re-imagine more just rural geographies. Under a Thirdspace understanding of space as dynamic, relational, and agentive (Soja, 1996), I examined how digital learning is caught up spatially, positioning the rural struggle over geography amid the ‘Right to the City’ rhetoric (Lefebvre, 1968). In response to this limiting and urban-centric rhetoric, I contest digital inequity as a spatial issue of justice in rural areas. After exploring how digital learning opportunities are distributed at state and local levels, I geo-ethnographically explored digital use to story how families across socio-economic spaces were utilizing digital tools. Then, because ineffective and deficit-based models of understanding erupt from blaming the oppressed for their own oppression, or from framing problems (e.g., digital inequity) as solely human-centered, I drew on posthumanist Latourian (2005) social cartographies of Thirdspace. From this, I re-imagined educational equity within rural space, recasting digital equity not in terms of the “haves and have nots” but as an account of mutually transformative socio-technical agency. Last, I pay the price of criticism by suggesting possible actions and solutions to the social ills denounced throughout this dissertation.

    Spatial Interaction for Immersive Mixed-Reality Visualizations

    Get PDF
    Growing amounts of data, both in personal and professional settings, have caused an increased interest in data visualization and visual analytics. Especially for inherently three-dimensional data, immersive technologies such as virtual and augmented reality and advanced, natural interaction techniques have been shown to facilitate data analysis. Furthermore, in such use cases, the physical environment often plays an important role, both by directly influencing the data and by serving as context for the analysis. Therefore, there has been a trend to bring data visualization into new, immersive environments and to make use of the physical surroundings, leading to a surge in mixed-reality visualization research. One of the resulting challenges, however, is the design of user interaction for these often complex systems. In my thesis, I address this challenge by investigating interaction for immersive mixed-reality visualizations regarding three core research questions: 1) What are promising types of immersive mixed-reality visualizations, and how can advanced interaction concepts be applied to them? 2) How does spatial interaction benefit these visualizations and how should such interactions be designed? 3) How can spatial interaction in these immersive environments be analyzed and evaluated? To address the first question, I examine how various visualizations such as 3D node-link diagrams and volume visualizations can be adapted for immersive mixed-reality settings and how they stand to benefit from advanced interaction concepts. For the second question, I study how spatial interaction in particular can help to explore data in mixed reality. There, I look into spatial device interaction in comparison to touch input, the use of additional mobile devices as input controllers, and the potential of transparent interaction panels. Finally, to address the third question, I present my research on how user interaction in immersive mixed-reality environments can be analyzed directly in the original, real-world locations, and how this can provide new insights. Overall, with my research, I contribute interaction and visualization concepts, software prototypes, and findings from several user studies on how spatial interaction techniques can support the exploration of immersive mixed-reality visualizations.
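    As a concrete illustration of the kind of spatial device interaction described above, the sketch below uses a tracked handheld device's pose as a cutting plane through a volume visualization. It is a minimal Python sketch under assumed conventions (a scalar volume indexed z/y/x, a device pose already transformed into volume coordinates, nearest-neighbor sampling); the function and parameter names are illustrative, not the thesis's actual API.

```python
import numpy as np

def slice_volume(volume, device_pos, device_normal, size=64, step=1.0):
    """Extract a 2D slice from a scalar volume along the plane defined by a
    tracked handheld device's pose (nearest-neighbor sampling; illustrative)."""
    n = device_normal / np.linalg.norm(device_normal)
    # Build two in-plane axes orthogonal to the device's facing direction.
    helper = np.array([0.0, 0.0, 1.0]) if abs(n[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(n, helper)
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    img = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            p = device_pos + ((i - size / 2) * u + (j - size / 2) * v) * step
            ix, iy, iz = np.round(p).astype(int)
            if 0 <= ix < volume.shape[2] and 0 <= iy < volume.shape[1] and 0 <= iz < volume.shape[0]:
                img[i, j] = volume[iz, iy, ix]
    return img

# Example: slice a random 64^3 volume through its center, device tilted diagonally.
vol = np.random.rand(64, 64, 64)
img = slice_volume(vol, device_pos=np.array([32.0, 32.0, 32.0]),
                   device_normal=np.array([0.0, 1.0, 1.0]))
```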

    Towards Predictive Rendering in Virtual Reality

    Get PDF
    Generating predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved before truly predictive image generation is achieved.
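    To make the BTF concept concrete: a measured BTF tabulates reflectance over surface position and a pair of light/view directions, BTF(x, y, ω_i, ω_o). The Python sketch below shows the simplest possible nearest-sample lookup; the (H, W, L, V, 3) table layout and the class name are assumptions for illustration. Practical BTF rendering, as in this thesis, additionally interpolates between samples and decompresses the factorized data on the fly.

```python
import numpy as np

class BTF:
    """Nearest-sample lookup in a tabulated bidirectional texture function.
    Illustrative layout: reflectance per surface texel and per pair of
    measured light/view directions, not the thesis's compressed format."""

    def __init__(self, data, light_dirs, view_dirs):
        self.data = data              # (H, W, L, V, 3) RGB reflectance table
        self.light_dirs = light_dirs  # (L, 3) unit vectors of measured lights
        self.view_dirs = view_dirs    # (V, 3) unit vectors of measured views

    def shade(self, u, v, omega_i, omega_o):
        h, w = self.data.shape[:2]
        x, y = int(u * (w - 1)), int(v * (h - 1))
        li = int(np.argmax(self.light_dirs @ omega_i))  # closest measured light
        vi = int(np.argmax(self.view_dirs @ omega_o))   # closest measured view
        return self.data[y, x, li, vi]
```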

    Extracting and Re-rendering Structured Auditory Scenes from Field Recordings

    Get PDF
    We present an approach to automatically extract and re-render a structured auditory scene from field recordings obtained with a small set of microphones freely positioned in the environment. From the recordings and the calibrated positions of the microphones, the 3D locations of various auditory events can be estimated together with their corresponding content. This structured description is independent of the reproduction setup. We propose solutions to classify foreground, well-localized sounds and more diffuse background ambiance, and adapt our rendering strategy accordingly. Warping the original recordings during playback allows for simulating smooth changes in the listening position or the positions of sources. Comparisons to reference binaural and B-format recordings show that our approach achieves good spatial rendering while remaining independent of the reproduction setup and offering extended authoring capabilities.
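    The localization step in such pipelines commonly rests on time differences of arrival (TDOA) between microphone pairs; the paper does not spell out its estimator here, so the Python sketch below shows one standard variant, cross-correlation computed via the frequency domain. Given TDOAs from several calibrated microphone pairs, the 3D source position then follows, e.g., by least-squares intersection of the corresponding hyperboloids.

```python
import numpy as np

def tdoa(sig_a, sig_b, sample_rate):
    """Estimate the time difference of arrival between two microphone
    signals from the peak of their cross-correlation (one standard
    technique; the paper's actual estimator may differ)."""
    n = len(sig_a) + len(sig_b) - 1
    # Cross-correlation via the frequency domain.
    corr = np.fft.irfft(np.fft.rfft(sig_a, n) * np.conj(np.fft.rfft(sig_b, n)), n)
    lag = int(np.argmax(corr))
    if lag > n // 2:          # indices past n/2 encode negative lags
        lag -= n
    return lag / sample_rate  # positive if sig_a is delayed w.r.t. sig_b
```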

    Scene relighting and editing for improved object insertion

    Get PDF
    The goal of this thesis is to develop a scene relighting and object insertion pipeline using Neural Radiance Fields (NeRF) to incorporate one or more objects into an outdoor environment scene. The output is a 3D mesh that embodies decomposed bidirectional reflectance distribution function (BRDF) characteristics, which interact with varying light source positions and strengths. To achieve this objective, the thesis is divided into two sub-tasks. The first sub-task involves extracting visual information about the outdoor environment from a sparse set of corresponding images; a neural representation is constructed, providing a comprehensive understanding of the constituent elements such as materials, geometry, illumination, and shadows. The second sub-task involves generating a neural representation of the inserted object using either real-world images or synthetic data. To accomplish these objectives, the thesis draws on existing literature in computer vision and computer graphics: different approaches are assessed to identify their advantages and disadvantages, and the chosen techniques are described in detail, highlighting how they produce the final outcome. Overall, this thesis aims to provide a framework for compositing and relighting that is grounded in NeRF and allows for the seamless integration of objects into outdoor environments. The outcome of this work has potential applications in various domains, such as visual effects, gaming, and virtual reality.
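    Once the scene is decomposed into BRDF characteristics, relighting reduces to re-evaluating a local shading model per surface point under the new light. The Python sketch below uses a Lambertian-plus-Blinn-Phong stand-in, since the abstract does not name the exact BRDF parameterization; all names and default values are illustrative.

```python
import numpy as np

def relight(albedo, normal, position, light_pos, light_rgb,
            specular=0.04, shininess=32.0, view_dir=np.array([0.0, 0.0, 1.0])):
    """Re-shade one surface point from decomposed material parameters under
    a movable point light (Lambertian diffuse + Blinn-Phong specular)."""
    to_light = light_pos - position
    dist2 = float(to_light @ to_light)
    l = to_light / np.sqrt(dist2)
    n = normal / np.linalg.norm(normal)
    irradiance = light_rgb * max(float(n @ l), 0.0) / dist2  # inverse-square falloff
    h = (l + view_dir) / np.linalg.norm(l + view_dir)        # half vector
    spec = specular * max(float(n @ h), 0.0) ** shininess
    return (albedo / np.pi + spec) * irradiance

# Moving light_pos or scaling light_rgb re-renders the point without
# re-estimating the scene, which is the point of the decomposition.
point_rgb = relight(albedo=np.array([0.8, 0.6, 0.4]),
                    normal=np.array([0.0, 1.0, 0.0]),
                    position=np.zeros(3),
                    light_pos=np.array([2.0, 3.0, 1.0]),
                    light_rgb=np.array([10.0, 10.0, 10.0]))
```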

    Mobile three-dimensional city maps

    Get PDF
    Maps are visual representations of environments and the objects within them, depicting their spatial relations. They are mainly used in navigation, where they act as external information sources supporting observation and decision-making processes. Map design, or the art-science of cartography, has traditionally simplified the environment: the naturally three-dimensional world is abstracted into a two-dimensional representation populated with simple geometrical shapes and symbols. Such an abstract representation, however, requires map-reading ability. Modern technology has reached the level where maps can be expressed in digital form, with selectable, scalable, browsable, and updatable content. Maps may no longer even be limited to two dimensions, nor to an abstract form. When a virtual environment based on the real world is created, a 3D map is born. Given a realistic representation, would the user no longer need to interpret the map, and be able to navigate in an inherently intuitive manner? To answer this question, one needs a mobile test platform. But can a 3D map, a resource-hungry real virtual environment, exist on such resource-limited devices? This dissertation approaches the technical challenges posed by mobile 3D maps in a constructive manner, identifying the problems, developing solutions, and providing answers by creating a functional system; the case focuses on urban environments. First, optimization methods for rendering large, static 3D city models are researched, and a solution suited for mobile 3D maps is provided by combining visibility culling, level-of-detail management, and out-of-core rendering. Then, the potential of mobile networking is addressed by developing efficient and scalable methods for progressive content downloading and dynamic entity management. Finally, a 3D navigation interface is developed for mobile devices, and the research is validated with measurements and field experiments. It is found that near-realistic mobile 3D city maps can exist on current mobile phones, and rendering rates are excellent on devices with 3D hardware. Such 3D maps can also be transferred and rendered on the fly sufficiently fast for navigation use over cellular networks. Real-world entities such as pedestrians or public transportation can be tracked and presented in a scalable manner. Mobile 3D maps are useful for navigation, but their usability depends highly on the interaction methods: the potentially intuitive representation does not, for example, imply faster navigation than a professional 2D street map. In addition, the physical interface limits the usability.
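    The rendering solution described above rests on skipping invisible geometry and reducing detail with distance. The Python sketch below shows the shape of such a per-frame selection loop; the thresholds, field names, and culling callback are invented for illustration and do not reflect the dissertation's actual implementation.

```python
import numpy as np

LOD_DISTANCES = [50.0, 200.0, 800.0]  # meters; beyond the last -> culled (assumed)

def select_lod(camera_pos, building):
    """Pick a level of detail from camera distance (0 = full detail)."""
    d = np.linalg.norm(building["center"] - camera_pos)
    for lod, limit in enumerate(LOD_DISTANCES):
        if d < limit:
            return lod
    return None  # too far away: skip; out-of-core data need not even be resident

def visible_set(camera_pos, buildings, is_in_frustum):
    """Per-frame draw list: frustum culling followed by LOD selection."""
    drawlist = []
    for b in buildings:
        if not is_in_frustum(b["center"], b["radius"]):
            continue  # visibility culling
        lod = select_lod(camera_pos, b)
        if lod is not None:
            drawlist.append((b["id"], lod))
    return drawlist
```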

    ReShader: View-Dependent Highlights for Single Image View-Synthesis

    Full text link
    In recent years, novel view synthesis from a single image has seen significant progress thanks to rapid advances in 3D scene representation and image inpainting techniques. While current approaches are able to synthesize geometrically consistent novel views, they often do not handle view-dependent effects properly. Specifically, the highlights in their synthesized images usually appear to be glued to the surfaces, making the novel views unrealistic. To address this major problem, we make the key observation that synthesizing a novel view requires both changing the shading of the pixels based on the novel camera and moving them to appropriate locations. Therefore, we propose to split the view synthesis process into two independent tasks: pixel reshading and relocation. During reshading, we take the single image as input and adjust its shading based on the novel camera. This reshaded image is then used as the input to an existing view synthesis method to relocate the pixels and produce the final novel view image. We propose to use a neural network to perform reshading, and we generate a large set of synthetic input-reshaded pairs to train our network. We demonstrate that our approach produces plausible novel view images with realistic moving highlights on a variety of real-world scenes.
    Comment: SIGGRAPH Asia 2023. Project page at https://people.engr.tamu.edu/nimak/Papers/SIGAsia2023_Reshader/index.html and video at https://www.youtube.com/watch?v=XW-tl48D3O
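    The two-stage split is the paper's central idea; the Python sketch below only fixes the data flow between the stages. The function names are placeholders, not the authors' API: reshading_net stands for their trained reshading network, and view_synthesis for whatever off-the-shelf single-image method relocates the pixels.

```python
def synthesize_novel_view(image, source_camera, novel_camera,
                          reshading_net, view_synthesis):
    """Two-stage single-image view synthesis sketch (names are placeholders)."""
    # Stage 1: pixel reshading. The network predicts how view-dependent
    # effects (e.g., highlights) change under the novel camera.
    reshaded = reshading_net(image, source_camera, novel_camera)
    # Stage 2: pixel relocation, delegated to an existing view synthesis
    # method (typically depth-based warping plus inpainting).
    return view_synthesis(reshaded, source_camera, novel_camera)
```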

    Toward a Scalable Census of Dashboard Designs in the Wild: A Case Study with Tableau Public

    Full text link
    Dashboards remain ubiquitous artifacts for presenting or reasoning with data across different domains. Yet, there has been little work that provides a quantifiable, systematic, and descriptive overview of dashboard designs at scale. We propose a schematic representation of dashboard designs as node-link graphs to better understand their spatial and interactive structures. We apply our approach to a dataset of 25,620 dashboards curated from Tableau Public to provide a descriptive overview of the core building blocks of dashboards in the wild and derive common dashboard design patterns. To guide future research, we make our dashboard corpus publicly available and discuss its application toward the development of dashboard design tools.
    Comment: J. Purich and A. Srinivasan contributed equally to this work.
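    A minimal version of the proposed schematic representation might look like the Python sketch below: each view becomes a node with a type and a bounding box, and edges record interactive links. The schema is illustrative; the paper's actual node and edge definitions may differ.

```python
from dataclasses import dataclass, field

@dataclass
class DashboardGraph:
    """Node-link encoding of a dashboard: views as nodes, links as edges."""
    nodes: dict = field(default_factory=dict)  # view id -> {"type", "bbox"}
    edges: list = field(default_factory=list)  # (src, dst, relation) tuples

    def add_view(self, vid, vtype, bbox):
        self.nodes[vid] = {"type": vtype, "bbox": bbox}

    def add_link(self, src, dst, relation):
        self.edges.append((src, dst, relation))

# Hypothetical two-view dashboard: a map that filters a bar chart.
g = DashboardGraph()
g.add_view("map", "choropleth", (0, 0, 600, 400))
g.add_view("bars", "bar_chart", (600, 0, 300, 400))
g.add_link("map", "bars", "filter")
```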