
    Insights into temperature controls on rockfall occurrence and cliff erosion

    A variety of environmental triggers have been associated with the occurrence of rockfalls; however, their role and relative significance remain poorly constrained. This is partly due to the lack of concurrent data on rockfall occurrence and cliff face conditions at temporal resolutions that mirror the variability of environmental conditions, and over durations long enough for large numbers of rockfall events to be captured. The aim of this thesis is to fill this data gap and then to focus specifically on the role of temperature in triggering the rockfalls that these data illuminate. To achieve this, a long-term, multiannual 3D rockfall dataset and contemporaneous Infrared Thermography (IRT) monitoring of cliff surface temperatures have been generated. The approaches used in this thesis are applied at East Cliff, Whitby, a coastal cliff in North Yorkshire, UK. The monitored section is ~200 m wide and ~65 m high, with a total cliff face area of ~9,592 m². A method for the automated quantification of rockfall volumes is used to explore data collected between 2017–2019 and in 2021, with the resulting inventory including >8,300 rockfalls from 2017–2019 and >4,100 rockfalls in 2021, totalling >12,400 rockfalls. Analysis of the inventory demonstrates that during dry conditions, increases in rockfall frequency coincide with diurnal surface temperature fluctuations, notably at sunrise, noon and sunset in all seasons, leading to a marked diurnal pattern of rockfall. Statistically significant relationships link cliff temperature and rockfall, highlighting the response of rock slopes to both absolute temperatures and changes in temperature. This research also shows that inclement weather constitutes the dominant control over the annual production of rockfalls, but it also quantifies the periods when temperature controls are dominant.
Temperature-controlled rockfall activity is shown to have an important erosional role, particularly in periods of iterative erosion dominated by small rockfalls. As such, this thesis provides, for the first time, high-resolution evidence of temperature controls on rockfall activity, cliff erosion and landform development.
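The diurnal pattern of rockfall frequency described above can be illustrated with a minimal sketch: given a list of event timestamps (the values below are hypothetical, not taken from the Whitby inventory), bin the events by hour of day and locate the busiest hour.

```python
from collections import Counter
from datetime import datetime

# Hypothetical event times; the real rockfall inventory is not reproduced here.
events = [
    datetime(2018, 6, 1, 5, 50),   # near sunrise
    datetime(2018, 6, 1, 6, 10),
    datetime(2018, 6, 1, 12, 5),   # near solar noon
    datetime(2018, 6, 2, 12, 20),
    datetime(2018, 6, 2, 21, 30),  # near sunset
]

def hourly_histogram(times):
    """Count events in each hour-of-day bin (0-23)."""
    counts = Counter(t.hour for t in times)
    return [counts.get(h, 0) for h in range(24)]

hist = hourly_histogram(events)
peak_hour = max(range(24), key=lambda h: hist[h])  # busiest hour: 12
```

On a real multiannual inventory, the same binning (optionally split by season and by wet/dry conditions) is a first step towards relating event frequency to the diurnal temperature cycle.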

    Learning-based generative representations for automotive design optimization

    In automotive design optimization, engineers intuitively look for suitable representations of CAE models that can be used across different optimization problems. Determining a suitable compact representation of 3D CAE models facilitates faster search and optimization of 3D designs. Therefore, to support novice designers in the automotive design process, we envision a cooperative design system (CDS) which learns the experience embedded in past optimization data and is able to assist the designer while performing an engineering design optimization task. The research in this thesis addresses different aspects that can be combined to form a CDS framework. First, based on a survey of deep learning techniques, a point cloud variational autoencoder (PC-VAE) is adapted from the literature, extended, and evaluated as a shape-generative model in design optimizations. The performance of the PC-VAE is verified against state-of-the-art architectures. The PC-VAE is capable of generating a continuous low-dimensional search space for 3D designs, which further supports the generation of novel, realistic 3D designs through interpolation and sampling in the latent space. In general, when designing a 3D car, engineers need to consider multiple structural or functional performance criteria. Hence, in the second step, the latent representations of the PC-VAE are evaluated for generating novel designs satisfying multiple criteria and user preferences. A seeding method is proposed to provide a warm start to the optimization process and improve convergence time. Further, to replace expensive simulations for performance estimation in an optimization task, surrogate models are trained to map each latent representation of an input 3D design to its respective geometric and functional performance measures. However, the performance of the PC-VAE is less consistent due to the additional regularization of the latent space.
Thirdly, to better understand which distinct region of the input 3D design is learned by a particular latent variable of the PC-VAE, a new deep generative model (Split-AE) is proposed, which extends the existing autoencoder architecture. The Split-AE learns input 3D point cloud representations and generates two sets of latent variables for each 3D design. The first set, referred to as content, represents the overall underlying structure of the 3D shape and discriminates it from other semantic shape categories. The second set, referred to as style, represents the unique parts of the input 3D shape, allowing shapes to be grouped into shape classes. The reconstruction and latent-variable disentanglement properties of the Split-AE are compared with other state-of-the-art architectures. In a series of experiments, it is shown that for given input shapes, the Split-AE generates content and style variables that give the flexibility to transfer and combine style features between different shapes. Thus, the Split-AE is able to disentangle features with minimal supervision and helps to generate novel shapes that are modified versions of existing designs. Lastly, to demonstrate the application of the initially envisioned CDS, two interactive systems were developed to assist designers in exploring design ideas. In the first CDS framework, the latent variables of the PC-VAE are integrated with a graphical user interface. This framework enables the designer to explore designs while taking into account the data-driven knowledge and different performance measures of 3D designs. The second interactive system aims to guide designers towards their design targets, for which past human experience of performing 3D design modifications is captured and learned using a machine learning model.
The trained model is then used to guide (novice) engineers and designers by predicting the next design modification step based on the currently applied changes.
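The latent-space interpolation and sampling mentioned above can be sketched as follows. The decoder here is a fixed random linear map standing in for the trained PC-VAE decoder, and the dimensions (8 latent variables, 16 output points) are arbitrary assumptions for illustration; only the interpolation mechanics mirror the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained PC-VAE decoder: a fixed linear map from an
# 8-dimensional latent vector to a (16, 3) point cloud. The real decoder
# is a neural network; the interpolation mechanics are the point here.
W = rng.normal(size=(8, 16 * 3))

def decode(z):
    """Map a latent vector to a point cloud (toy decoder)."""
    return (z @ W).reshape(16, 3)

def interpolate(z_a, z_b, t):
    """Linear interpolation in latent space: a continuous path between
    two designs, used to generate in-between shapes."""
    return (1.0 - t) * z_a + t * z_b

z_a = rng.normal(size=8)   # latent code of design A (random placeholder)
z_b = rng.normal(size=8)   # latent code of design B (random placeholder)
mid = decode(interpolate(z_a, z_b, 0.5))  # the "halfway" design
```

In the same spirit, a surrogate model would be a second learned map from the latent vector to scalar performance measures, so that an optimizer can search in the latent space without running simulations.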

    Drift-diffusion models for innovative semiconductor devices and their numerical solution

    We present charge transport models for novel semiconductor devices, which may include ionic species, as well as their thermodynamically consistent finite volume discretization.
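A standard building block of such finite volume discretizations of drift-diffusion equations is the Scharfetter-Gummel flux; the sketch below shows it in a generic textbook form (mobility, edge length, and thermal voltage normalized to 1, an assumption for illustration), not necessarily the exact scheme of this work.

```python
import math

def bernoulli(x):
    """Bernoulli function B(x) = x / (exp(x) - 1), with the removable
    singularity at x = 0 handled explicitly."""
    if abs(x) < 1e-10:
        return 1.0 - 0.5 * x
    return x / math.expm1(x)

def sg_flux(n_left, n_right, dpsi):
    """Scharfetter-Gummel flux between two adjacent control volumes.
    dpsi is the potential drop (left minus right) in units of the thermal
    voltage; mobility and edge length are normalized to 1. For dpsi = 0
    the flux reduces to pure diffusion, n_left - n_right."""
    return bernoulli(-dpsi) * n_left - bernoulli(dpsi) * n_right
```

This exponential-fitting flux stays stable for large potential drops between neighbouring cells, which is why it is the usual choice over a naive central difference.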

    Neural Reflectance Decomposition

    Creating relightable objects from images or image collections is a fundamental challenge in computer vision and graphics. This problem is also known as inverse rendering. One of the main challenges in this task is its high ambiguity. The process of creating images from 3D objects is known as rendering. However, multiple properties such as shape, illumination, and surface reflectiveness influence each other, and an integration over these influences is performed to form the final image. Reversing these integrated dependencies is highly ill-posed and ambiguous. Solving the task is nevertheless essential, as the automated creation of relightable objects has various applications in online shopping, augmented reality (AR), virtual reality (VR), games, and movies. In this thesis, we propose two approaches to solve this task. First, a network architecture is discussed which learns, from large training datasets, to decompose a two-shot capture of an object. The degree of novel view synthesis is limited, as only a single perspective is used in the decomposition. Therefore, a second set of approaches is proposed, which decomposes a collection of 360-degree images into shape, reflectance, and illumination. These multi-view images are optimized per object, and the result can be used directly in standard rendering software or games. We achieve this by extending recent research on Neural Fields, which can store information in a 3D neural volume. Leveraging volume rendering techniques, we can optimize a reflectance field from in-the-wild image collections without any ground truth (GT) supervision.
Our proposed methods achieve state-of-the-art decomposition quality and enable novel capture setups where objects can be under varying illumination or in different locations, which is typical for online image collections.
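The volume rendering technique mentioned above reduces, per camera ray, to an emission-absorption quadrature in which each sample contributes its opacity times the transmittance accumulated in front of it. A minimal sketch (densities and step sizes below are arbitrary illustrative inputs, not from the thesis):

```python
import math

def render_weights(sigmas, deltas):
    """Per-sample quadrature weights for emission-absorption volume
    rendering along a ray: w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    where T_i is the transmittance accumulated before sample i. A pixel
    colour is then sum_i w_i * c_i over per-sample colours c_i."""
    weights, transmittance = [], 1.0
    for sigma, delta in zip(sigmas, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this segment
        weights.append(transmittance * alpha)
        transmittance *= 1.0 - alpha            # light surviving past it
    return weights
```

Because every step is differentiable, the same formula lets gradients flow from rendered pixels back into the volumetric quantities being optimized, which is what makes fitting a reflectance field from images possible.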

    Optimisation of LiDAR data acquisition planning towards 3D indoor modelling

    The main objective of this doctoral thesis is the design, validation and implementation of methodologies that allow the geometric and topological modelling of navigable spaces, whether inside buildings or in urban environments, to be integrated into three-dimensional geographic information systems (GIS-3D). The input data for this work consist mainly of point clouds (which may be classified) acquired by LiDAR systems both indoors and outdoors. In addition, the use of BIM infrastructure models and cadastral maps is proposed, depending on their availability. Point clouds provide a large amount of environmental information with high accuracy compared to the data offered by other acquisition technologies. However, their lack of structure and their volume demand considerable processing effort. For this reason, the first step is to structure the data by dividing the input cloud into simpler entities that facilitate subsequent processing. This first division considers the physical elements present in the cloud, such as walls in indoor environments or kerbs outdoors. To generate navigation routes adapted to different mobile agents, the next objective is to establish a semantic subdivision of space according to its functionalities. In indoor environments, BIM models can be used to evaluate the results, while cadastral maps support the division of the urban environment. Once the navigable space is divided, the design of topologically coherent navigation networks will be parameterized both geometrically and topologically. For this purpose, several spatial discretization techniques, such as 3D tessellations, will be studied to facilitate the establishment of topological relationships of adjacency, connectivity and inclusion between subspaces.
Based on the geometric characterization and the topological relations established in the previous phase, the creation of three-dimensional navigation networks with multimodal support will be addressed, and different levels of detail will be considered according to the mobility specifications of each agent and its purpose. Finally, the possibility of integrating the generated networks into a GIS-3D visualization system will be considered. For correct visualization, the level of detail can be adjusted according to geometry and semantics. Aspects such as the type of user or transport, mobility, and rights of access to spaces must be considered at all times.
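One simple spatial discretization of the kind discussed above is a regular voxel tessellation, from which adjacency relations between occupied cells follow directly. The sketch below (points and voxel size are invented for the example) builds occupied voxels from a point cloud and links face-adjacent neighbours:

```python
# Illustrative sketch: structure a point cloud as a regular voxel grid and
# derive 6-connected adjacency between occupied voxels, a toy stand-in for
# the 3D tessellations and topological relations studied in the thesis.
def voxelize(points, size):
    """Map each (x, y, z) point to the integer index of its voxel."""
    return {tuple(int(c // size) for c in p) for p in points}

def adjacency(voxels):
    """Undirected edges between occupied voxels sharing a face."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    edges = set()
    for v in voxels:
        for dx, dy, dz in offsets:
            n = (v[0] + dx, v[1] + dy, v[2] + dz)
            if n in voxels:
                edges.add(tuple(sorted((v, n))))
    return edges

pts = [(0.1, 0.1, 0.1), (1.2, 0.1, 0.1), (5.0, 5.0, 5.0)]
vox = voxelize(pts, 1.0)  # three occupied voxels
adj = adjacency(vox)      # one face-adjacent pair: (0,0,0)-(1,0,0)
```

The resulting adjacency graph is the kind of structure over which navigation networks with connectivity and inclusion relations can then be built.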

    Data Tiling for Sparse Computation

    Many real-world datasets contain internal relationships. Efficient analysis of these relationship data is crucial for important problems including genome alignment, network vulnerability analysis, and ranking web pages, among others. Such relationship data are frequently sparse, and analysis on them is called sparse computation. We demonstrate that the important technique of data tiling is more powerful than previously known by broadening its application space. We focus on three important sparse computation areas: graph analysis, linear algebra, and bioinformatics. We demonstrate data tiling's power by addressing key issues and providing significant improvements, to both runtime and solution quality, in each area. For graph analysis, we focus on fast data tiling techniques that can produce well-structured tiles, and we demonstrate theoretical hardness results. These tiles are suitable for graph problems as they reduce data movement and ultimately improve end-to-end runtime performance. For linear algebra, we introduce a new cache-aware tiling technique and apply it to the key kernel of sparse-matrix-by-sparse-matrix multiplication. This technique tiles the second input matrix and then uses a small summary matrix to guide access to the tiles during computation. Our approach results in the fastest known implementation across three distinct CPU architectures. In bioinformatics, we develop a tiling-based de novo genome assembly pipeline. We start with reads and build either a graph or a hypergraph that captures the internal relationships between reads. This is then tiled to minimize connections while maintaining balance. We then treat each resulting tile independently as the input to an existing shared-memory assembler. Our pipeline improves on existing state-of-the-art de novo genome assemblers, bringing both runtime and quality improvements on both real-world and simulated datasets.
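The idea of tiling the second operand and summarising its occupancy can be sketched as follows; this is an illustrative toy, not the thesis implementation, and the "summary" here simply records which rows touch each column tile so that tiles with no work can be skipped.

```python
# Illustrative toy: tile the columns of the second operand B of a sparse
# product A @ B, and record per tile which rows of B are non-empty.
# B is stored as a dict mapping row index -> list of occupied column indices.
def column_tiles(B_rows, n_cols, tile_width):
    """Summarise, for each column tile, the set of occupied B rows."""
    n_tiles = (n_cols + tile_width - 1) // tile_width  # ceiling division
    summary = [set() for _ in range(n_tiles)]
    for row, cols in B_rows.items():
        for c in cols:
            summary[c // tile_width].add(row)
    return summary

B = {0: [0, 3], 2: [1]}                      # 3x4 sparse matrix pattern
summary = column_tiles(B, n_cols=4, tile_width=2)
# tile 0 (cols 0-1) touches rows {0, 2}; tile 1 (cols 2-3) touches row {0}
```

A cache-aware kernel would size the tiles so each summary and tile fits in cache, and consult the summary to decide which tiles a given row of A needs.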

    Human History and Digital Future

    Corrected reprint: in the chapter "Wallace/Moullou: Viability of Production and Implementation of Retrospective Photogrammetry in Archaeology", the acknowledgements were removed. The Proceedings of the 46th Annual Conference on Computer Applications and Quantitative Methods in Archaeology, held from March 19th to 23rd, 2018 at the University of Tübingen, Germany, discuss current questions concerning digital recording, computer analysis, graphic and 3D visualization, data management and communication in the field of archaeology. Through a selection of diverse case studies from all over the world, the proceedings give an overview of new technical approaches and best practice from various archaeological and computer-science disciplines.

    Annals of Scientific Society for Assembly, Handling and Industrial Robotics 2021

    This Open Access proceedings volume presents a good overview of the current research landscape of assembly, handling and industrial robotics. The objective of the MHI Colloquium is successful networking at both the academic and the management level. The colloquium therefore focuses on academic exchange at a high level in order to disseminate the obtained research results, identify synergy effects and trends, connect the actors in person and, in conclusion, strengthen the research field as well as the MHI community. In addition, there is the possibility to become acquainted with the organizing institute. The primary audience is formed by members of the Scientific Society for Assembly, Handling and Industrial Robotics (WGMHI).

    Towards Fully Dynamic Surface Illumination in Real-Time Rendering using Acceleration Data Structures

    The improvements in GPU hardware, including hardware-accelerated ray tracing, and the push for fully dynamic, realistic-looking video games have been driving more research into the use of ray tracing in real-time applications. The work described in this thesis covers multiple aspects, such as optimisations, adapting existing offline methods to real-time constraints, and adding effects which were hard to simulate without the new hardware, all working towards fully dynamic surface illumination rendering in real time.

Our first main area of research concerns photon-based techniques, commonly used to render caustics. As many photons can be required for good coverage of the scene, an efficient approach for detecting which ones contribute to a pixel is essential. We improve that process by adapting and extending an existing acceleration data structure; where performance is paramount, we present an approximation which trades off some quality for a 2–3× improvement in rendering time. Tracing all the photons, especially when long paths are needed, had become the highest cost. As most paths do not change from frame to frame, we introduce a validation procedure allowing the reuse of as many as possible, even in the presence of dynamic lights and objects. Previous algorithms for associating pixels and photons do not robustly handle specular materials, so we designed an approach leveraging ray tracing hardware to allow caustics to be visible in mirrors or behind transparent objects.

Our second research focus switches from a light-based perspective to a camera-based one, to improve the picking of light sources when shading: photon-based techniques are wonderful for caustics, but not as efficient for direct lighting estimation. When a scene has thousands of lights, only a handful can be evaluated at any given pixel due to time constraints. Current selection methods in video games are fast, but at the cost of introducing bias. By adapting an acceleration data structure from offline rendering that stochastically chooses a light source based on its importance, we provide unbiased direct lighting evaluation at about 30 fps. To support dynamic scenes, we organise it in a two-level system, making it possible to update only the parts containing moving lights, and in a more efficient way.

We worked on top of the new ray tracing hardware to handle lighting situations that previously proved too challenging, and presented optimisations relevant for future algorithms in that space. These contributions will help reduce some artistic constraints when designing new virtual scenes for real-time applications.
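Importance-based stochastic light selection of the kind described above is commonly implemented as a tree traversal: at each interior node, descend into a subtree with probability proportional to its summed importance, accumulating the selection probability needed for an unbiased estimate. A hedged sketch follows; the tree layout and weights are invented for illustration, and a production light hierarchy would use spatial and intensity bounds rather than fixed scalars.

```python
# Nodes are ('leaf', light_id, weight) or ('split', left_child, right_child).
def subtree_weight(node):
    """Total importance stored beneath a node."""
    if node[0] == 'leaf':
        return node[2]
    return subtree_weight(node[1]) + subtree_weight(node[2])

def pick_light(node, u, p=1.0):
    """Descend the tree using a uniform random number u in [0, 1),
    choosing each subtree with probability proportional to its weight.
    Returns (light_id, selection_probability); dividing a sample's
    contribution by this probability keeps the estimator unbiased."""
    if node[0] == 'leaf':
        return node[1], p
    left, right = node[1], node[2]
    p_left = subtree_weight(left) / (subtree_weight(left) + subtree_weight(right))
    if u < p_left:
        return pick_light(left, u / p_left, p * p_left)
    return pick_light(right, (u - p_left) / (1.0 - p_left), p * (1.0 - p_left))

tree = ('split', ('leaf', 'A', 3.0), ('leaf', 'B', 1.0))
light, prob = pick_light(tree, 0.5)  # -> ('A', 0.75)
```

A real implementation would cache the subtree weights in the nodes; a two-level variant, as in the text, keeps moving lights in small subtrees so only those need rebuilding per frame.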