
    Doctor of Philosophy

    The present work focuses on developing a holistic understanding of flow and dispersion in urban environments. Toward this end, ideas are drawn from the fields of physical modeling, inverse modeling, and optimization in urban fluid dynamics. The physical modeling part of the dissertation investigates flow in the vicinity of tall buildings using two-dimensional particle image velocimetry (PIV) measurements in a wind tunnel. The data obtained have been used to evaluate and improve urban wind and dispersion models. In the inverse modeling part of the dissertation, an event reconstruction tool is developed to quickly and accurately characterize the source parameters of chemical/biological/radiological (CBR) agents released into the atmosphere in an urban domain. Event reconstruction is performed using concentration measurements obtained from a distributed sensor network in the city, where the spatial coordinates of the sensors are known a priori. Source characterization involves retrieving several source parameters, including the spatial coordinates of the source, the source strength, and the wind speed and direction at the source. The Gaussian plume model is adopted as the forward model, and derivative-based optimization is chosen to take advantage of its simple analytical nature. The solution technique developed is independent of the forward model used and combines stochastic search with regularized gradient optimization. The final part of the dissertation addresses urban form optimization. The problem of identifying urban forms that result in the best environmental conditions is referred to as the urban form optimization problem (UFOP). The decision variables include the spatial locations and physical dimensions of the buildings as well as the wind speed and direction over the domain of interest. For the UFOP, the Quick Urban and Industrial Complex (QUIC) dispersion model is used as the forward model. The UFOP is cast as a single optimization problem, and simulated annealing and genetic algorithms are used in the solution procedure.
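As a rough illustration of the inverse-modeling step described above, the sketch below pairs a Gaussian plume forward model with a least-squares fit of the source location and strength started from several random guesses, loosely mirroring the combination of stochastic search and gradient-based optimization mentioned in the abstract. The power-law dispersion coefficients, sensor layout, wind parameters, and regularization weight are illustrative assumptions, not the dissertation's actual configuration.

```python
# Minimal sketch: Gaussian plume forward model plus a multistart,
# gradient-based source-parameter fit. All constants are illustrative.
import numpy as np
from scipy.optimize import minimize

def gaussian_plume(params, receptors, u=3.0, H=2.0):
    """Ground-level concentration at receptors for a point source.

    params   : (x_s, y_s, Q) source location [m] and strength [g/s]
    receptors: (N, 2) array of receptor (x, y) coordinates [m]
    u        : wind speed along +x [m/s]; H: release height [m]
    """
    x_s, y_s, Q = params
    dx = receptors[:, 0] - x_s                  # downwind distance
    dy = receptors[:, 1] - y_s                  # crosswind offset
    dx = np.where(dx > 1.0, dx, np.nan)         # mask receptors that are not downwind
    sigma_y = 0.08 * dx**0.9                    # illustrative power-law dispersion coefficients
    sigma_z = 0.06 * dx**0.85
    c = (Q / (2 * np.pi * u * sigma_y * sigma_z)
         * np.exp(-dy**2 / (2 * sigma_y**2))
         * 2 * np.exp(-H**2 / (2 * sigma_z**2)))   # ground reflection at z = 0
    return np.nan_to_num(c)

def misfit(params, receptors, observed):
    """Regularized least-squares mismatch between model and sensor data."""
    resid = gaussian_plume(params, receptors) - observed
    return np.sum(resid**2) + 1e-6 * params[2]**2   # weak penalty on source strength

# Synthetic "sensor network": recover source parameters from noisy observations.
rng = np.random.default_rng(0)
receptors = rng.uniform([50, -100], [800, 100], size=(25, 2))
truth = np.array([0.0, 10.0, 5.0])
observed = gaussian_plume(truth, receptors) * (1 + 0.05 * rng.standard_normal(25))

best = min((minimize(misfit, g, args=(receptors, observed), method="L-BFGS-B")
            for g in rng.uniform([-50, -50, 0.1], [100, 50, 20], size=(10, 3))),
           key=lambda r: r.fun)                     # crude stochastic multistart
print("recovered (x_s, y_s, Q):", best.x)
```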

    Deep learning for characterizing full-color 3D printers: accuracy, robustness, and data-efficiency

    High-fidelity color and appearance reproduction via multi-material-jetting full-color 3D printing has seen increasing applications, including the preservation of art and cultural artifacts, product prototypes, game character figurines, stop-motion animated movies, and 3D-printed prostheses such as dental restorations or prosthetic eyes. To achieve high-quality appearance reproduction via full-color 3D printing, a prerequisite is an accurate optical printer model: a function that predicts the optical/visual properties (e.g. spectral reflectance, color, and translucency) of the resulting print from an arrangement or ratio of printing materials. For appearance 3D printing, the model needs to be inverted to determine the printing material arrangement that reproduces distinct optical/visual properties such as color. Therefore, the accuracy of optical printer models plays a crucial role in the final print quality. The process of fitting an optical printer model's parameters for a printing system is called optical characterization, which requires test prints and optical measurements. The objective of developing a printer model is to maximize prediction performance, such as accuracy, while minimizing optical characterization efforts, including printing, post-processing, and measuring. In this thesis, I aim to leverage deep learning to achieve holistically performant optical printer models in terms of three performance aspects: 1) accuracy, 2) robustness, and 3) data efficiency. First, for model accuracy, we propose two deep learning-based printer models that both achieve high accuracy with only a moderate number of required training samples. Experiments show that both models outperform the traditional cellular Neugebauer model by large margins: up to 6 times higher accuracy, or up to 10 times less data for a similar accuracy. The high accuracy could enhance or even enable color- and translucency-critical applications of 3D printing such as dental restorations or prosthetic eyes. Second, for model robustness, we propose a methodology to induce physically plausible constraints and smoothness in deep learning-based optical printer models. Experiments show that the model not only corrects implausible relationships between material arrangement and the resulting optical/visual properties in almost all cases, but also ensures significantly smoother predictions. The robustness and smoothness improvements are important to alleviate or avoid unacceptable banding artifacts in the textures of the final printouts, particularly for applications where texture details must be preserved, such as reproducing prosthetic eyes whose texture must match the companion (healthy) eye. Finally, for data efficiency, we propose a learning framework that significantly improves printer models' data efficiency by employing existing characterization data from other printers. We also propose a contrastive learning-based approach to learn the dataset embeddings that this framework requires as extra inputs. Experiments show that the learning framework can drastically reduce the number of samples required to achieve an application-specific prediction accuracy. For some printers, it requires only 10% of the samples to achieve a similar accuracy to the state-of-the-art model. The significant improvement in data efficiency makes it economically feasible to characterize 3D printers frequently and thereby achieve more consistent output across different printers and over time, which is crucial for color- and translucency-critical individualized mass production. With these proposed deep learning-based methodologies significantly improving all three performance aspects (i.e. accuracy, robustness, and data efficiency), a holistically performant optical printer model can be achieved, which is particularly important for color- and translucency-critical applications such as dental restorations or prosthetic eyes.
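As a rough sketch of what a deep learning-based optical printer model can look like, the example below trains a small multilayer perceptron to map material-mixing ratios to CIELAB color. The material set, network size, and placeholder training data are assumptions for illustration only; the thesis proposes more elaborate models and loss functions.

```python
# Minimal sketch of an optical printer model: an MLP mapping a material-mixing
# ratio (e.g. fractions of five resins) to a predicted CIELAB color.
import torch
import torch.nn as nn

class PrinterModel(nn.Module):
    def __init__(self, n_materials=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_materials, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 3),            # predicted (L*, a*, b*)
        )

    def forward(self, ratios):
        # ratios: (batch, n_materials), each row summing to 1
        return self.net(ratios)

# Characterization data would come from printed and measured test patches;
# random placeholders in a plausible Lab range stand in here.
ratios = torch.rand(256, 5)
ratios = ratios / ratios.sum(dim=1, keepdim=True)
measured_lab = (torch.rand(256, 3) * torch.tensor([100.0, 120.0, 120.0])
                - torch.tensor([0.0, 60.0, 60.0]))

model = PrinterModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(ratios), measured_lab)  # proxy for a color-difference loss
    loss.backward()
    opt.step()
```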

    Robust Computer Vision Against Adversarial Examples and Domain Shifts

    Recent advances in deep learning have achieved remarkable success in various computer vision problems. Driven by progressive computing resources and vast amounts of data, deep learning technology is reshaping human life. However, Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples, in which carefully crafted perturbations can easily fool DNNs into making wrong predictions. DNNs also generalize poorly under domain shifts, suffering performance degradation when they encounter data from new visual distributions. We view these issues from the perspective of robustness: existing deep learning technology is not reliable enough for many scenarios, and adversarial examples and domain shifts are among the most critical threats. The lack of reliability inevitably limits DNNs from being deployed in more important computer vision applications, such as self-driving vehicles and medical instruments, that have major safety concerns. To overcome these challenges, we focus on investigating and addressing the robustness of deep learning-based computer vision approaches. The first part of this thesis attempts to robustify computer vision models against adversarial examples. We approach adversarial robustness from four aspects: novel attacks for strengthening benchmarks, empirical defenses validated by a third-party evaluator, generalizable defenses that can defend against multiple and unforeseen attacks, and defenses specifically designed for less explored tasks. The second part of this thesis improves robustness against domain shifts via domain adaptation. We study two important domain adaptation settings: unsupervised domain adaptation, which is the most common, and source-free domain adaptation, which is more practical in real-world scenarios. The last part explores the intersection of adversarial robustness and domain adaptation to provide new insights for robust DNNs. We study two directions: adversarial defense for domain adaptation and adversarial defense via domain adaptation. This dissertation aims at more robust, reliable, and trustworthy computer vision.
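For context, the sketch below implements the fast gradient sign method (FGSM), one of the simplest ways to craft the adversarial examples discussed above. The toy classifier and random inputs are placeholders; the thesis studies considerably stronger attacks and defenses.

```python
# Minimal FGSM sketch: one signed-gradient step within an L-infinity ball.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return an adversarially perturbed copy of x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid image range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with a placeholder classifier on random "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())   # perturbation bounded by epsilon
```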

    Improving Compute & Data Efficiency of Flexible Architectures


    Sixth Biennial Report: August 2001 - May 2003


    Connecting mathematical models for image processing and neural networks

    This thesis deals with the connections between mathematical models for image processing and deep learning. While data-driven deep learning models such as neural networks are flexible and perform well, they are often used as black boxes. This makes it hard to provide theoretical model guarantees and scientific insights. On the other hand, more traditional, model-driven approaches such as diffusion, wavelet shrinkage, and variational models offer a rich set of mathematical foundations. Our goal is to transfer these foundations to neural networks. To this end, we pursue three strategies. First, we design trainable variants of traditional models and reduce their parameter set after training to obtain transparent and adaptive models. Second, we investigate the architectural design of numerical solvers for partial differential equations and translate them into building blocks of popular neural network architectures. This yields criteria for stable networks and inspires novel design concepts. Lastly, we present novel hybrid models for inpainting that rely on our theoretical findings. These strategies provide three ways of combining the best of the model-driven and data-driven worlds. Our work contributes to the overarching goal of closing the gap in performance and understanding that still exists between these worlds.
    ERC Advanced Grant INCOVI
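To make the PDE-solver/network analogy concrete, the sketch below writes one explicit step of homogeneous diffusion as a residual update, mirroring the skip-connection structure of a ResNet block. The stencil, step size, and input are illustrative choices, not the architectures derived in the thesis.

```python
# Minimal sketch: an explicit diffusion step u_{k+1} = u_k + tau * Laplacian(u_k)
# has the same "skip connection plus update" form as a residual block.
import torch
import torch.nn.functional as F

def diffusion_step(u, tau=0.2):
    """One explicit step of homogeneous diffusion via a 5-point Laplacian stencil."""
    laplace = torch.tensor([[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]]).view(1, 1, 3, 3)
    return u + tau * F.conv2d(u, laplace, padding=1)   # residual update, like a ResNet block

u = torch.rand(1, 1, 64, 64)        # placeholder grayscale image
for _ in range(50):                 # explicit-scheme stability requires tau <= 0.25 on a unit grid
    u = diffusion_step(u)
```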

    Applied Metaheuristic Computing

    For decades, Applied Metaheuristic Computing (AMC) has been a prevailing optimization technique for tackling perplexing engineering and business problems such as scheduling, routing, ordering, bin packing, assignment, and facility layout planning, among others. This is partly because classic exact methods are constrained by prior assumptions, and partly because heuristics are problem-dependent and lack generalization. AMC, by contrast, guides the course of low-level heuristics to search beyond the local optimality that limits traditional computational methods. This topic series has collected quality papers proposing cutting-edge methodologies and innovative applications that drive the advances of AMC.
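As a small, generic illustration of a metaheuristic escaping local optima, the sketch below applies simulated annealing to a toy routing (tour-length) objective. The neighborhood move, cooling schedule, and random instance are arbitrary choices and do not come from the papers in this series.

```python
# Minimal simulated annealing sketch on a toy tour-length objective.
import math
import random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(20)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

order = list(range(len(cities)))
best, best_len = order[:], tour_length(order)
temp = 1.0
for _ in range(20000):
    i, j = sorted(random.sample(range(len(order)), 2))
    cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]       # 2-opt style segment reversal
    delta = tour_length(cand) - tour_length(order)
    if delta < 0 or random.random() < math.exp(-delta / temp):    # accept uphill moves probabilistically
        order = cand
        if tour_length(order) < best_len:
            best, best_len = order[:], tour_length(order)
    temp *= 0.9995                                                # geometric cooling
print("best tour length:", round(best_len, 3))
```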

    Using MapReduce Streaming for Distributed Life Simulation on the Cloud

    Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s Life according to a general MR streaming pattern. We chose Life because it is simple enough to serve as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
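A minimal, self-contained sketch of the map/reduce pattern for a lattice-based simulation is given below: the mapper replicates each row to its vertical neighbors, and the reducer recomputes a row from the (up to three) rows it receives. The record format and the in-memory driver standing in for the shuffle phase are illustrative assumptions, and the paper's strip-partitioning optimization is not reproduced here.

```python
# Minimal sketch of one generation of Conway's Game of Life in MapReduce style.
from collections import defaultdict

def mapper(records):
    # records: iterable of (row_index, list_of_cells)
    for r, cells in records:
        for target in (r - 1, r, r + 1):          # a row influences itself and its vertical neighbors
            yield target, (r, cells)

def reducer(key, values):
    rows = dict(values)                           # the rows needed to update row `key`
    if key not in rows:
        return None                               # keys outside the grid
    width = len(rows[key])
    new_row = []
    for c in range(width):
        live = -rows[key][c]                      # do not count the cell itself
        for rr in (key - 1, key, key + 1):
            for cc in (c - 1, c, c + 1):
                live += rows.get(rr, [0] * width)[cc] if 0 <= cc < width else 0
        new_row.append(1 if live == 3 or (live == 2 and rows[key][c]) else 0)
    return key, new_row

# Tiny in-memory driver standing in for the MR shuffle phase.
grid = {0: [0, 1, 0], 1: [0, 1, 0], 2: [0, 1, 0]}          # a vertical blinker
shuffled = defaultdict(list)
for k, v in mapper(grid.items()):
    shuffled[k].append(v)
grid = dict(filter(None, (reducer(k, vs) for k, vs in sorted(shuffled.items()))))
print(grid)                                                 # {0: [0,0,0], 1: [1,1,1], 2: [0,0,0]}
```

Roughly speaking, the strip partitioning evaluated in the paper would group many consecutive rows into a single record so that far fewer key-value pairs cross the shuffle phase, which is where its reported speedup comes from.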

    Seventh Biennial Report: June 2003 - March 2005
