
    Statistical Models and Optimization Algorithms for High-Dimensional Computer Vision Problems

    Data-driven and computational approaches are showing significant promise for solving challenging problems in fields such as bioinformatics, finance, and many branches of engineering. In this dissertation, we explore the potential of these approaches, specifically statistical data models and optimization algorithms, for solving several challenging problems in computer vision, thereby contributing to the literature on both statistical data models and computer vision. In the context of statistical data models, we propose principled approaches for solving robust regression problems, both linear and kernel, and the missing-data matrix factorization problem. In computer vision, we propose statistically optimal and efficient algorithms for the remote face recognition and structure from motion (SfM) problems. The goal of robust regression is to estimate the functional relation between two variables from a data set that may be contaminated with outliers. Under the reasonable assumption that a data set contains fewer outliers than inliers, we formulate the robust linear regression problem as a sparse learning problem, which can be solved using efficient polynomial-time algorithms, and we provide sufficient conditions under which the proposed algorithms correctly solve the robust regression problem. We then extend our robust formulation to kernel regression, specifically proposing a robust version of relevance vector machine (RVM) regression. Matrix factorization is used to find a low-dimensional representation of data embedded in a high-dimensional space. Singular value decomposition is the standard algorithm for this problem; however, when the matrix has many missing elements, the problem becomes hard to solve.
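    A common baseline for the missing-data factorization problem just described is alternating least squares restricted to the observed entries. The sketch below illustrates that baseline only; it is not the rank-constrained SDP formulation the dissertation proposes, and all sizes, seeds, and the regularization constant are illustrative choices.

```python
import numpy as np

def masked_als(M, mask, rank=2, n_iter=100, reg=1e-6):
    """Alternating least squares for matrix factorization with missing data.

    Finds U (m x r) and V (n x r) with M ~= U @ V.T on the observed
    entries (mask == True), solving one small regularized least-squares
    problem per row of U and per row of V in turn."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    U = rng.normal(size=(m, rank))
    V = rng.normal(size=(n, rank))
    I = reg * np.eye(rank)
    for _ in range(n_iter):
        for i in range(m):                       # update each row of U
            o = mask[i, :]
            U[i] = np.linalg.solve(V[o].T @ V[o] + I, V[o].T @ M[i, o])
        for j in range(n):                       # update each row of V
            o = mask[:, j]
            V[j] = np.linalg.solve(U[o].T @ U[o] + I, U[o].T @ M[o, j])
    return U, V

# Low-rank ground truth with roughly 30% of the entries missing.
rng = np.random.default_rng(5)
M_true = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 30))
mask = rng.random(M_true.shape) > 0.3
U, V = masked_als(M_true, mask, rank=2)
M_hat = U @ V.T
```

    On noiseless low-rank data such as this, the recovered factors typically reproduce the held-out (missing) entries accurately, which is exactly what makes the problem well posed despite the mask.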
We formulate the missing-data matrix factorization problem as a low-rank semidefinite programming problem (essentially a rank-constrained SDP), which allows us to find accurate and efficient solutions for large-scale factorization problems. Face recognition from remotely acquired images is a challenging problem because of variations due to blur and illumination. Using the convolution model for blur, we show that the set of all images obtained by blurring a given image forms a convex set. We then use convex optimization techniques to compute the distances between a given blurred (probe) image and the gallery images and find the best match. Further, using a low-dimensional linear subspace model for illumination variations, we extend our theory in a similar fashion to recognize blurred and poorly illuminated faces. Bundle adjustment is the final optimization step of the SfM problem, where the goal is to obtain the 3-D structure of the observed scene and the camera parameters from multiple images of the scene. The traditional bundle adjustment algorithm, based on minimizing the l_2 norm of the image re-projection error, has cubic complexity in the number of unknowns. We propose an algorithm, based on minimizing the l_infinity norm of the re-projection error, that has quadratic complexity in the number of unknowns. This is achieved by decomposing the large-scale optimization problem into many small-scale sub-problems, each of which can be solved using second-order cone programming.
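    The convexity argument for blur can be sketched concretely: under the convolution model, blur kernels are non-negative and sum to one, so the set {k * g : k in the simplex} of all blurred versions of a gallery image g is convex, and the distance from a probe to that set is a convex program. The toy below uses 1-D signals, a fixed kernel length, and plain projected gradient descent; these are illustrative simplifications, not the dissertation's actual algorithm or image model.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto the probability simplex
    # {k : k >= 0, sum(k) = 1} (standard sort-based method).
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def distance_to_blur_set(gallery, probe, m=3, n_iter=1000):
    """Distance from `probe` to the convex set of blurred versions of
    `gallery`, minimized over kernels k in the simplex by projected
    gradient descent."""
    # Columns of A are shifted copies of the gallery signal, so that
    # A @ k equals the full convolution k * gallery.
    A = np.stack([np.concatenate([np.zeros(j), gallery, np.zeros(m - 1 - j)])
                  for j in range(m)], axis=1)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    k = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        k = project_simplex(k - A.T @ (A @ k - probe) / L)
    return np.linalg.norm(A @ k - probe)

rng = np.random.default_rng(4)
gallery_match = rng.normal(size=64)
gallery_other = rng.normal(size=64)
k_true = np.array([0.2, 0.5, 0.3])
probe = np.convolve(gallery_match, k_true)   # blurred version of the match

d_match = distance_to_blur_set(gallery_match, probe)
d_other = distance_to_blur_set(gallery_other, probe)
```

    Matching then amounts to picking the gallery image whose blur set is closest to the probe: the true gallery yields a near-zero distance, while an unrelated one does not.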

    Intelligent Numerical Software for MIMD Computers

    For most scientific and engineering problems simulated on computers, solving problems of computational mathematics with approximately given initial data constitutes an intermediate or final stage. Basic problems of computational mathematics include the investigation and solution of linear algebraic systems, the evaluation of eigenvalues and eigenvectors of matrices, the solution of systems of non-linear equations, and the numerical integration of initial-value problems for systems of ordinary differential equations.
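    The four basic problem classes listed above each have standard numerical routines. The following sketch shows one minimal instance of each using NumPy/SciPy; the concrete matrices, equations, and tolerances are illustrative choices, not taken from the work itself.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.integrate import solve_ivp

# 1. Linear algebraic system: solve A x = b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = np.linalg.solve(A, b)

# 2. Eigenvalues and eigenvectors of a (symmetric) matrix.
eigvals, eigvecs = np.linalg.eigh(A)

# 3. System of non-linear equations:
#    u^2 + v^2 = 1 and u = v, solved from an initial guess.
root = fsolve(lambda p: [p[0]**2 + p[1]**2 - 1.0, p[0] - p[1]], x0=[1.0, 0.0])

# 4. Initial-value problem y' = -y, y(0) = 1, integrated on [0, 1].
sol = solve_ivp(lambda t, y: -y, t_span=(0.0, 1.0), y0=[1.0], rtol=1e-8)
```

    Each call returns an approximate solution whose residual can be checked directly, which is the usual sanity test when the initial data are themselves only approximately given.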

    Regularized Optimization Methods for Reconstruction and Modeling in Computer Graphics

    The field of computer graphics deals with virtual representations of the real world. These can be obtained either by reconstructing a model from measurements or by directly modeling a virtual object, often based on a real-world example. The former is often formalized as a regularized optimization problem, in which a data term ensures consistency between model and data and a regularization term promotes solutions with high a priori probability. In this dissertation, different reconstruction problems in computer graphics are shown to be instances of a common class of optimization problems that can be solved using a uniform algorithmic framework. Moreover, it is shown that similar optimization methods can also be used to solve data-based modeling problems, where the amount of information that can be obtained from measurements is insufficient for accurate reconstruction. As real-world examples of reconstruction problems, sparsity and group-sparsity methods are presented for radio-interferometric image reconstruction in static and time-dependent settings. As a modeling example, analogous approaches are investigated to automatically create volumetric models of astronomical nebulae from single images based on symmetry assumptions.
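    The data-term-plus-regularizer structure described above can be made concrete with a minimal sparse reconstruction example: ISTA (iterative shrinkage-thresholding) solving min_x ||A x - b||^2 / 2 + lam ||x||_1. This is a generic sketch of the problem class, not the dissertation's specific framework; the measurement sizes, sparsity pattern, and lam are illustrative assumptions.

```python
import numpy as np

def ista(A, b, lam=0.05, n_iter=3000):
    """ISTA for min_x ||A x - b||^2 / 2 + lam * ||x||_1.

    The quadratic data term keeps x consistent with the measurements b;
    the l1 regularizer promotes sparse solutions (high a priori
    probability under a sparsity prior)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                  # gradient of the data term
        v = x - g / L                          # gradient step
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # prox of lam*||.||_1
    return x

# Underdetermined measurements of a 3-sparse signal.
rng = np.random.default_rng(1)
A = rng.normal(size=(60, 120))
x_true = np.zeros(120)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = ista(A, b)
```

    Although the system has twice as many unknowns as measurements, the regularizer singles out the sparse solution, which is the mechanism the abstract invokes for both reconstruction and under-constrained modeling.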

    Theoretical Optimization of Enzymatic Biomass Processes

    This dissertation introduces Cellulect, a complete, stochastically based algorithmic framework to study, optimize, and predict hydrolysis processes of the structured biomass cellulose. The framework combines a comprehensive geometric model of the cellulosic substrate, with a microstructured distribution of crystalline/amorphous regions, distinct monomers, a distribution of polymer chain lengths, and free-surface-area tracking. An efficient tracking algorithm, formulated in a serial fashion, updates the system reaction by reaction while preserving the notion of real time. Advanced types of enzyme actions (random cuts, reduced/non-reduced end cuts, orientation, and the possibility of a fixed position of active centers) and their modular structure (carbohydrate-binding module with a flexible linker and a catalytic domain) are taken into account within the framework. The concept of state machines is adopted to model enzyme entities. This provides a reliable, powerful, and maintainable approach for modelling already known enzyme features and can be extended with additional features not taken into account in the present work. The extensive probabilistic description of the catalytic mechanism further includes adsorption, desorption, competitive inhibition by soluble product polymers, and dynamic bond-breaking reactions that depend on the states of monomers and their polymers within the substrate. All incorporated parameters refer to specific system properties, providing a one-to-one relationship between degrees of freedom and available features of the model. Finally, time propagation of the system is based on a modified stochastic Gillespie algorithm, an exact stochastic time-reaction propagation method that accounts for the random nature of reaction events as well as their random occurrence times.
    The framework is ready for constrained input-parameter estimation from empirical data sets of product concentration profiles using common optimization routines. It has been verified against the available literature data for the most common enzyme kinds (EG, β-G, CBH). A sensitivity analysis of the estimated model parameters was carried out, and the dependence on various experimental inputs is shown. Optimization behavior under underdetermined conditions is inspected and visualized. Results and predictions for mixtures of optimized enzymes are provided, together with a practical way to implement and use the Cellulect framework. The obtained results, compared with experimental literature data, demonstrate the high flexibility, efficiency, and accuracy of the presented framework for predicting the cellulose hydrolysis process.
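    The Gillespie time-propagation step mentioned above can be sketched generically: draw an exponential waiting time from the total propensity, then pick a reaction channel with probability proportional to its propensity. The reaction system below is a deliberately simple toy (one bond-cutting channel), not the cellulose chemistry or the modified algorithm of the Cellulect framework.

```python
import numpy as np

def gillespie(x0, stoich, rates, t_max, rng):
    """Minimal sketch of the exact Gillespie stochastic simulation algorithm.

    `stoich` holds one state-change vector per reaction channel and
    `rates(x)` returns the channel propensities for the current state x."""
    t, x = 0.0, np.array(x0, dtype=float)
    history = [(t, x.copy())]
    while True:
        a = rates(x)
        a0 = a.sum()
        if a0 <= 0.0:                          # no reaction can fire any more
            break
        tau = rng.exponential(1.0 / a0)        # random waiting time
        if t + tau > t_max:                    # next event falls past the horizon
            break
        t += tau
        j = rng.choice(len(a), p=a / a0)       # random reaction channel
        x = x + stoich[j]
        history.append((t, x.copy()))
    return history

# Toy depolymerization: each cut removes one bond B and releases one
# soluble product P, with propensity proportional to the bonds left.
rng = np.random.default_rng(2)
stoich = [np.array([-1.0, 1.0])]               # state = (bonds B, products P)
rates = lambda x: np.array([0.5 * x[0]])
hist = gillespie([100.0, 0.0], stoich, rates, t_max=50.0, rng=rng)
t_end, x_end = hist[-1]
```

    Because both the waiting time and the channel choice are sampled, the trajectory reflects the random nature of reaction events as well as their random occurrence times, while the bond/product balance is conserved exactly.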

    Advanced physics-based and data-driven strategies

    Simulation Based Engineering Science (SBES) has brought major improvements in optimization, control, and inverse analysis, all leading to a deeper understanding of many processes occurring in the real world. These noticeable breakthroughs are present in a vast variety of sectors such as the aeronautic and automotive industries, mobile telecommunications, and healthcare, among many other fields. Nevertheless, SBES currently confronts several difficulties in providing accurate results for complex industrial problems. Apart from the high computational costs associated with industrial applications, the errors introduced by constitutive modeling become more and more important when dealing with new materials. Concurrently, an unceasingly growing interest in concepts such as Big Data, Machine Learning, and Data Analytics has been experienced, intrinsically motivated by the rapid development of both data-acquisition and data-storage systems. For instance, an aircraft may produce over 500 GB of data during a single flight. This panorama brings a perfect opportunity for the so-called Dynamic Data Driven Application Systems (DDDAS), whose main objective is to merge classical simulation algorithms with data coming from experimental measurements in a dynamic way. Within this scenario, data and simulations would no longer be decoupled; instead, exploiting their symbiosis would achieve milestones that were inconceivable until now. Indeed, data will no longer be understood as a static calibration of a given constitutive model; rather, the model will be corrected dynamically as soon as experimental data and simulations tend to diverge. Several numerical algorithms whose main objective is to strengthen the link between data and computational mechanics will be presented throughout this manuscript. The first part of the thesis is mainly focused on parameter identification, data-driven, and data completion techniques.
The second part is focused on Model Order Reduction (MOR) techniques, since they constitute a fundamental ally for achieving the real-time constraints arising from the DDDAS framework.
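    A standard Model Order Reduction technique of the kind the second part relies on is Proper Orthogonal Decomposition (POD): build a low-dimensional basis from solution snapshots via the SVD and keep enough modes to capture a chosen fraction of the snapshot energy. The sketch below uses synthetic snapshots and an illustrative energy threshold; it is a generic POD example, not the thesis's specific method.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD reduced basis: columns of V span the dominant snapshot modes,
    with enough modes retained to capture `energy` of the total
    (squared-singular-value) snapshot energy."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cumulative, energy)) + 1   # number of modes kept
    return U[:, :r], r

# Synthetic snapshots: states that really live on a 3-dimensional
# manifold inside a 500-dimensional state space, plus tiny noise.
rng = np.random.default_rng(3)
modes = rng.normal(size=(500, 3))
coeffs = rng.normal(size=(3, 80))
S = modes @ coeffs + 1e-6 * rng.normal(size=(500, 80))

V, r = pod_basis(S)
S_reduced = V.T @ S            # each 500-dim snapshot becomes r numbers
S_reconstructed = V @ S_reduced
```

    Replacing the full state with its handful of modal coefficients is what makes real-time evaluation feasible: the reduced model is orders of magnitude smaller while the reconstruction error is bounded by the discarded energy.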

    International Colloquium on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering: 20 to 22 July 2015, Bauhaus-Universität Weimar

    The 20th International Conference on the Applications of Computer Science and Mathematics in Architecture and Civil Engineering will be held at the Bauhaus-Universität Weimar from 20 to 22 July 2015. Architects, computer scientists, mathematicians, and engineers from all over the world will meet in Weimar for an interdisciplinary exchange of experiences, to report on their results in research, development, and practice, and to discuss them. The conference covers a broad range of research areas: numerical analysis, function-theoretic methods, partial differential equations, continuum mechanics, engineering applications, coupled problems, computer science, and related topics. Several plenary lectures in the aforementioned areas will take place during the conference. We invite architects, engineers, designers, computer scientists, mathematicians, planners, project managers, and software developers from business, science, and research to participate in the conference.

    Proceedings of the International RILEM Conference Materials, Systems and Structures in Civil Engineering segment on Service Life of Cement-Based Materials and Structures

    Vol. 1. Volume II is available at: http://hdl.handle.net/1822/4390