
    Reconstruction and Synthesis of Human-Scene Interaction

    In this thesis, we argue that the 3D scene is vital for understanding, reconstructing, and synthesizing human motion. We present several approaches that take the scene into consideration when reconstructing and synthesizing Human-Scene Interaction (HSI). We first observe that state-of-the-art pose estimation methods ignore the 3D scene and hence reconstruct poses that are inconsistent with it. We address this by proposing a pose estimation method that takes the 3D scene explicitly into account; we call our method PROX, for Proximal Relationships with Object eXclusion. We leverage the data generated using PROX and build a method to automatically place 3D scans of clothed people in scenes. The core novelty of our method is encoding the proximal relationships between the human and the scene in a novel HSI model, called POSA, for Pose with prOximitieS and contActs. POSA is limited to static HSI, however. We therefore propose a real-time method for synthesizing dynamic HSI, which we call SAMP, for Scene-Aware Motion Prediction. SAMP enables virtual humans to navigate cluttered indoor scenes and naturally interact with objects. Data-driven kinematic models like SAMP can produce high-quality motion when applied in environments similar to those in the training data; when applied to new scenarios, however, they can struggle to generate realistic behaviors that respect scene constraints. In contrast, we present InterPhys, which uses adversarial imitation learning and reinforcement learning to train physically simulated characters that perform scene-interaction tasks in a physically plausible and life-like manner.
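
    As a concrete illustration of the scene-aware fitting idea behind PROX, the sketch below extends a standard 2D reprojection objective with a contact term and an interpenetration penalty evaluated against a signed-distance field (SDF) of the scene. This is a minimal Python sketch under assumed simplifications; the function names, loss weights, and the 2 cm contact threshold are illustrative, not the thesis's actual implementation.

        # Minimal sketch of a scene-aware fitting objective in the spirit of
        # PROX; `scene_sdf` is assumed to map vertices to signed distances
        # (negative = inside scene geometry).
        import numpy as np

        def scene_aware_loss(joints_2d_pred, joints_2d_obs, body_verts, scene_sdf,
                             w_contact=0.5, w_penetration=10.0):
            """Data term + contact encouragement + interpenetration penalty."""
            # Standard 2D reprojection error (data term).
            data = np.sum((joints_2d_pred - joints_2d_obs) ** 2)

            # Signed distances of body vertices to the scene surface.
            d = scene_sdf(body_verts)

            # Contact: pull vertices that are already near the scene onto it.
            near = np.abs(d) < 0.02  # within 2 cm, an assumed threshold
            contact = np.sum(np.abs(d[near]))

            # Exclusion: heavily penalize vertices inside scene geometry.
            penetration = np.sum(np.minimum(d, 0.0) ** 2)

            return data + w_contact * contact + w_penetration * penetration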

    Offene-Welt-Strukturen: Architektur, Stadt- und Naturlandschaft im Computerspiel

    What role do algorithms play in the construction of images and the depiction of world and weather in computer games? How does the design of spaces, levels, and topographies influence players' decisions and behavior? Is Brutalism the first genuine architectural style of computer games? What significance do landscape gardens and national parks have in structuring game worlds? How is nature depicted in times of climate change? Particularly over the last 20 years, digital game worlds have adapted features of the physical, real world more meticulously than ever. Through elaborate production processes and complex visualization strategies, this approximation of our everyday world is always produced in dependence on game mechanics and worldness. As the example of open-world games makes clear, the adoption of certain world views and pictorial traditions leads to ideological implications that reach far beyond the narrative conventions transferred from other media formats on which research has focused so far. With his theory of architecture as a medial hinge, the author reveals that digital game worlds exhibit medium-specific properties that could not previously be grasped and were awaiting investigation. By interweaving concepts from media studies, game studies, philosophy, architectural theory, human geography, landscape theory, and art history, among other fields, Bonner develops a transdisciplinary theoretical model and, with the analytical methods derived from it, makes it possible for the first time to understand and name the complex structure of today's computer games, from indie games to AAA open worlds. With "Offene-Welt-Strukturen", the architectonics of digital game worlds becomes comprehensively accessible.

    Computational modeling of biological nanopores

    Throughout our history, we humans have sought to better control and understand our environment. To this end, we have extended our natural senses with a host of sensors: tools that enable us to detect both the very large, such as the merging of two black holes at a distance of 1.3 billion light-years from Earth, and the very small, such as the identification of individual viral particles in a complex mixture. This dissertation is devoted to studying the physical mechanisms that govern a tiny yet highly versatile sensor: the biological nanopore. Biological nanopores are protein molecules that form nanometer-sized apertures in lipid membranes. When an individual molecule passes through this aperture (i.e., "translocates"), the temporary disturbance of the ionic current caused by its passage reveals valuable information on its identity and properties. Despite this seemingly straightforward sensing principle, the complexity of the interactions between the nanopore and the translocating molecule means that it is often very challenging to unambiguously link the changes in the ionic current to the precise physical phenomena that cause them. It is here that the computational methods employed in this dissertation have the potential to shine, as they are capable of modeling nearly all aspects of the sensing process with near-atomistic precision. Beyond familiarizing the reader with the concepts and state of the art of the nanopore field, the primary goals of this dissertation are fourfold: (1) develop methodologies for accurate modeling of biological nanopores; (2) investigate the equilibrium electrostatics of biological nanopores; (3) elucidate the trapping behavior of a protein inside a biological nanopore; and (4) map the transport properties of a biological nanopore. In the first results chapter of this thesis (Chapter 3), we used 3D equilibrium simulations [...]

    Comment: PhD thesis, 306 pages. Source code available at https://github.com/willemsk/phdthesis-tex
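
    The sensing principle described above can be made concrete with the simple textbook conductance model for a cylindrical pore, which combines the channel resistance with the access resistance at the pore mouths. The sketch below uses illustrative values for a 1 M KCl electrolyte; it is a back-of-the-envelope model, not the detailed simulation methodology of the dissertation.

        # Minimal sketch of the cylindrical-pore conductance model
        # (channel resistance plus access resistance); values are illustrative.
        import math

        def pore_conductance(sigma, L, d):
            """Open-pore conductance G = sigma / (4L/(pi d^2) + 1/d).

            sigma: bulk electrolyte conductivity (S/m)
            L:     pore length (m)
            d:     pore diameter (m)
            """
            return sigma / (4.0 * L / (math.pi * d**2) + 1.0 / d)

        # Example: ~10 S/m (1 M KCl), 5 nm long, 1.5 nm wide pore.
        G = pore_conductance(10.0, 5e-9, 1.5e-9)
        I_open = G * 0.1  # open-pore current at a 100 mV bias, in amperes
        print(f"G = {G*1e9:.1f} nS, I = {I_open*1e9:.2f} nA")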

    Neural function approximation on graphs: shape modelling, graph discrimination & compression

    Graphs serve as a versatile mathematical abstraction of real-world phenomena in numerous scientific disciplines. This thesis belongs to the subject area of Geometric Deep Learning, a family of learning paradigms that capitalise on the increasing volume of non-Euclidean data so as to solve real-world tasks in a data-driven manner. In particular, we focus on the topic of graph function approximation using neural networks, which lies at the heart of many relevant methods. In the first part of the thesis, we contribute to the understanding and design of Graph Neural Networks (GNNs). Initially, we investigate the problem of learning on signals supported on a fixed graph. We show that treating graph signals as general graph spaces is restrictive and that conventional GNNs have limited expressivity. Instead, we expose a more enlightening perspective by drawing parallels between graph signals and signals on Euclidean grids, such as images and audio. Accordingly, we propose a permutation-sensitive GNN based on an operator analogous to shifts in grids and instantiate it on 3D meshes for shape modelling (Spiral Convolutions). Next, we focus on learning on general graph spaces, and in particular on functions that are invariant to graph isomorphism. We identify a fundamental trade-off between invariance, expressivity, and computational complexity, which we address with a symmetry-breaking mechanism based on substructure encodings (Graph Substructure Networks). Substructures are shown to be a powerful tool that provably improves expressivity while controlling computational complexity, and a useful inductive bias in network science and chemistry. In the second part of the thesis, we discuss the problem of graph compression, where we analyse the information-theoretic principles and the connections with graph generative models. We show that another inevitable trade-off surfaces, now between computational complexity and compression quality, due to graph isomorphism. We propose a substructure-based dictionary coder, Partition and Code (PnC), with theoretical guarantees, which can be adapted to different graph distributions by estimating its parameters from observations. Additionally, contrary to the majority of neural compressors, PnC is parameter- and sample-efficient and is therefore of wide practical relevance. Finally, within this framework, substructures are further illustrated as a decisive archetype for learning problems on graph spaces.
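
    To make the substructure-encoding idea concrete, the sketch below augments node features with per-node triangle counts before they are fed to a downstream GNN, which is one simple way to break the symmetries that limit 1-WL-bounded architectures. This is a minimal illustration in the spirit of Graph Substructure Networks, not the thesis's implementation; the choice of triangles as the substructure and the feature layout are assumptions.

        # Minimal sketch of substructure encoding: append each node's
        # triangle count as an extra feature column.
        import networkx as nx
        import numpy as np

        def add_triangle_counts(G: nx.Graph, X: np.ndarray) -> np.ndarray:
            """Augment node features X with per-node triangle counts."""
            counts = nx.triangles(G)  # {node: number of triangles through it}
            tri = np.array([counts[v] for v in G.nodes()], dtype=float)
            return np.hstack([X, tri[:, None]])

        G = nx.karate_club_graph()
        X = np.eye(G.number_of_nodes())    # one-hot placeholder features
        X_aug = add_triangle_counts(G, X)  # fed to any downstream GNN
        print(X_aug.shape)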

    Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems

    Advances in artificial intelligence (AI) are fueling a new paradigm of discoveries in the natural sciences. Today, AI has started to advance the natural sciences by improving, accelerating, and enabling our understanding of natural phenomena at a wide range of spatial and temporal scales, giving rise to a new area of research known as AI for science (AI4Science). Being an emerging research paradigm, AI4Science is unique in that it is an enormous and highly interdisciplinary area. Thus, a unified and technical treatment of this field is needed yet challenging. This work aims to provide a technically thorough account of a subarea of AI4Science, namely AI for quantum, atomistic, and continuum systems. These areas aim at understanding the physical world from the subatomic (wavefunctions and electron density) and atomic (molecules, proteins, materials, and interactions) to the macro (fluids, climate, and subsurface) scales, and they form an important subarea of AI4Science. A unique advantage of focusing on these areas is that they largely share a common set of challenges, thereby allowing a unified and foundational treatment. A key common challenge is how to capture physics first principles, especially symmetries, in natural systems by deep learning methods. We provide an in-depth yet intuitive account of techniques to achieve equivariance to symmetry transformations. We also discuss other common technical challenges, including explainability, out-of-distribution generalization, knowledge transfer with foundation and large language models, and uncertainty quantification. To facilitate learning and education, we provide categorized lists of resources that we found to be useful. We strive to be thorough and unified and hope this initial effort may trigger more community interest and effort to further advance AI4Science.
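
    The equivariance property at the heart of these techniques can be checked numerically: a map f is rotation-equivariant if f(Rx) = R f(x) for every rotation R. The sketch below verifies this for a toy equivariant map; the function and the test setup are illustrative only, not a method from the survey.

        # Minimal numerical check of rotation equivariance, f(Rx) == R f(x).
        import numpy as np

        def f(points):
            """Toy equivariant map: scale each 3D point by its own norm
            (an invariant scalar times an equivariant vector)."""
            norms = np.linalg.norm(points, axis=1, keepdims=True)
            return norms * points

        rng = np.random.default_rng(0)
        R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
        x = rng.normal(size=(5, 3))                   # five 3D points

        lhs = f(x @ R.T)              # rotate the input, then apply f
        rhs = f(x) @ R.T              # apply f, then rotate the output
        print(np.allclose(lhs, rhs))  # True: f is rotation-equivariant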

    Artificial Intelligence and Cognitive Computing

    Artificial intelligence (AI) is a subject garnering increasing attention in both academia and industry today. The understanding is that AI-enhanced methods and techniques create a variety of opportunities related to improving basic and advanced business functions, including production processes, logistics, financial management, and others. As this collection demonstrates, AI-enhanced tools and methods tend to offer more precise results in the fields of engineering, financial accounting, tourism, air-pollution management, and many more. The objective of this collection is to bring these topics together to offer the reader a useful primer on how AI-enhanced tools and applications can be of use in today's world. In the context of the frequently fearful, skeptical, and emotion-laden debates on AI and its added value, this volume promotes a positive perspective on AI and its impact on society. AI is part of a broader ecosystem of sophisticated tools, techniques, and technologies, and is therefore not immune to developments in that ecosystem. It is thus imperative that inter- and multidisciplinary research on AI and its ecosystem is encouraged. This collection contributes to that effort.

    Towards Fully Dynamic Surface Illumination in Real-Time Rendering using Acceleration Data Structures

    The improvements in GPU hardware, including hardware-accelerated ray tracing, and the push for fully dynamic, realistic-looking video games have been driving more research into the use of ray tracing in real-time applications. The work described in this thesis covers multiple aspects, such as optimisations, adapting existing offline methods to real-time constraints, and adding effects which were hard to simulate without the new hardware, all working towards fully dynamic surface illumination rendering in real time.

    Our first main area of research concerns photon-based techniques, commonly used to render caustics. As many photons can be required for good coverage of the scene, an efficient approach for detecting which ones contribute to a pixel is essential. We improve that process by adapting and extending an existing acceleration data structure; if performance is paramount, we present an approximation which trades off some quality for a 2–3× improvement in rendering time. The tracing of all the photons, especially when long paths are needed, had become the highest cost. As most paths do not change from frame to frame, we introduce a validation procedure allowing the reuse of as many paths as possible, even in the presence of dynamic lights and objects. Previous algorithms for associating pixels and photons do not robustly handle specular materials, so we designed an approach leveraging ray tracing hardware to allow caustics to be visible in mirrors or behind transparent objects.

    Our second research focus switches from a light-based perspective to a camera-based one, to improve the picking of light sources when shading: photon-based techniques are wonderful for caustics, but not as efficient for direct lighting estimation. When a scene has thousands of lights, only a handful can be evaluated at any given pixel due to time constraints. Current selection methods in video games are fast but introduce bias. By adapting an acceleration data structure from offline rendering that stochastically chooses a light source based on its importance, we provide unbiased direct lighting evaluation at about 30 fps. To support dynamic scenes, we organise it in a two-level system, making it possible to update only the parts containing moving lights, and in a more efficient way.

    We worked on top of the new ray tracing hardware to handle lighting situations that previously proved too challenging, and presented optimisations relevant for future algorithms in that space. These contributions will help reduce some artistic constraints when designing new virtual scenes for real-time applications.
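
    The unbiased light-selection idea can be illustrated with a minimal sketch: pick one light with probability proportional to an importance estimate and divide its contribution by that probability, which keeps the estimator unbiased however crude the importance heuristic is. The flat weighted choice below stands in for the thesis's hierarchical acceleration structure; all names and the importance heuristic are illustrative.

        # Minimal sketch of unbiased stochastic light selection.
        import random

        def pick_light(lights, shading_point):
            """Choose one light with probability ~ importance; return (light, prob)."""
            def importance(l):
                # Crude importance: intensity over squared distance, ignoring visibility.
                d2 = sum((a - b) ** 2 for a, b in zip(l["pos"], shading_point))
                return l["intensity"] / max(d2, 1e-6)

            weights = [importance(l) for l in lights]
            total = sum(weights)
            chosen, = random.choices(lights, weights=weights)
            return chosen, importance(chosen) / total

        # Unbiased single-sample estimate of direct lighting at a point x:
        #   light, p = pick_light(lights, x)
        #   estimate = contribution(light, x) / p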

    Neural probabilistic path prediction: skipping paths for acceleration

    Path tracing is one of the most popular Monte Carlo methods used in computer graphics to solve the problem of global illumination. A path-traced image is much more photorealistic than standard rendering methods such as rasterization and even ray tracing. Unfortunately, path tracing is expensive to compute and slow to converge, resulting in noisy images when unconverged. Many methods aimed at accelerating path tracing have been developed, but each has its own downsides and limitations. Recent advances in deep learning, especially with conditional generative models, have shown that such models are very capable at learning, modeling, and sampling from complex distributions. As path tracing also depends on sampling from complex distributions, we investigate the similarities between the two problems and model the path tracing process itself as a conditional generative process. It can then be used to build an efficient neural estimator that allows us to accelerate rendering time with as few assumptions as possible. We show that our neural estimator (NPPP), used along with path tracing, can improve rendering time by a considerable amount without compromising much in rendering quality. The estimator is also shown to be very flexible and allows a user to control and prioritize quality or rendering time, without any further training or modifications to the neural network.
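
    The core idea, treating the tail of a light path as a sample from a learned conditional distribution, can be sketched as follows. `RadianceSampler` is a hypothetical stand-in for the thesis's network, not its actual architecture: it maps features of the traced path prefix to a Gaussian over the remaining radiance and draws a reparameterized sample from it.

        # Minimal sketch of skipping path segments with a learned
        # conditional sampler; all names and sizes are illustrative.
        import torch

        class RadianceSampler(torch.nn.Module):
            """Toy conditional model: path-prefix features -> radiance sample."""
            def __init__(self, d_in=16):
                super().__init__()
                self.net = torch.nn.Sequential(
                    torch.nn.Linear(d_in, 64), torch.nn.ReLU(),
                    torch.nn.Linear(64, 2 * 3),  # mean and log-variance of RGB
                )

            def sample(self, prefix_features):
                mu, log_var = self.net(prefix_features).chunk(2, dim=-1)
                # Reparameterized draw from the predicted Gaussian.
                return mu + torch.randn_like(mu) * (0.5 * log_var).exp()

        model = RadianceSampler()
        prefix = torch.randn(1, 16)      # encoded first bounces of a path (assumed)
        radiance = model.sample(prefix)  # predicted radiance of the skipped tail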

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    Image-Based Rendering Of Real Environments For Virtual Reality
