
    Greedy approximation of high-dimensional Ornstein-Uhlenbeck operators with unbounded drift

    We investigate the convergence of a nonlinear approximation method introduced by Ammar et al. (cf. J. Non-Newtonian Fluid Mech. 139:153--176, 2006) for the numerical solution of high-dimensional Fokker--Planck equations featuring in Navier--Stokes--Fokker--Planck systems that arise in kinetic models of dilute polymers. In the case of Poisson's equation on a rectangular domain in $\mathbb{R}^2$, subject to a homogeneous Dirichlet boundary condition, the mathematical analysis of the algorithm was carried out recently by Le Bris, Lelièvre and Maday (Constr. Approx. 30:621--651, 2009), by exploiting its connection to greedy algorithms from nonlinear approximation theory explored, for example, by DeVore and Temlyakov (Adv. Comput. Math. 5:173--187, 1996); hence, the variational version of the algorithm, based on the minimization of a sequence of Dirichlet energies, was shown to converge. In this paper, we extend the convergence analysis of the pure greedy and orthogonal greedy algorithms considered by Le Bris, Lelièvre and Maday to the technically more complicated case where the Laplace operator is replaced by a high-dimensional Ornstein--Uhlenbeck operator with unbounded drift, of the kind that appears in Fokker--Planck equations arising in bead-spring-chain-type kinetic polymer models with finitely extensible nonlinear elastic potentials, posed on a high-dimensional Cartesian product configuration space $D = D_1 \times \dots \times D_N \subset \mathbb{R}^{Nd}$, where each set $D_i$, $i = 1, \dots, N$, is a bounded open ball in $\mathbb{R}^d$, $d = 2, 3$.
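The pure greedy algorithm from nonlinear approximation theory that this analysis builds on can be sketched in a few lines: at each step, pick the dictionary element most correlated with the current residual and subtract its projection. The toy sketch below (Python/numpy, with a hypothetical random dictionary of unit vectors) only illustrates that residual-reduction loop; it is not the Fokker--Planck solver analyzed in the paper.

```python
import numpy as np

def pure_greedy(f, dictionary, steps=100):
    """Pure greedy approximation of a vector f over a dictionary of
    unit-norm atoms: repeatedly subtract the projection of the residual
    onto the best-correlated atom."""
    residual = f.astype(float).copy()
    approx = np.zeros_like(residual)
    for _ in range(steps):
        scores = dictionary @ residual          # inner products with every atom
        k = np.argmax(np.abs(scores))           # best-correlated atom
        approx += scores[k] * dictionary[k]
        residual -= scores[k] * dictionary[k]
    return approx, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((200, 40))
D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms
f = rng.standard_normal(40)
approx, residual = pure_greedy(f, D)
# the invariant approx + residual == f holds, and the residual norm shrinks
print(np.linalg.norm(residual) < np.linalg.norm(f))
```

Each step strictly decreases the residual energy, since subtracting the projection onto a unit atom removes the squared inner product from the squared residual norm.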

    Hyperspectral Image Analysis through Unsupervised Deep Learning

    Hyperspectral image (HSI) analysis has become an active research area in the computer vision field, with a wide range of applications. However, in order to yield better recognition and analysis results, two challenging issues of HSI need to be addressed: the existence of mixed pixels and its significantly low spatial resolution (LR). In this dissertation, spectral unmixing (SU) and hyperspectral image super-resolution (HSI-SR) approaches are developed to address these two issues with advanced deep learning models in an unsupervised fashion. A specific application, anomaly detection, is also studied to show the importance of SU. Although deep learning has achieved state-of-the-art performance on supervised problems, its practice on unsupervised problems has not been fully developed. To address the problem of SU, an untied denoising autoencoder is proposed to decompose the HSI into endmembers and abundances with non-negativity and abundance sum-to-one constraints. The denoising capacity is incorporated into the network with a sparsity constraint to boost the performance of endmember extraction and abundance estimation. Moreover, the first attempt is made to solve the problem of HSI-SR using an unsupervised encoder-decoder architecture by fusing the LR HSI with the high-resolution multispectral image (MSI). The architecture is composed of two encoder-decoder networks, coupled through a shared decoder, to preserve the rich spectral information from the HSI network. It encourages the representations from both modalities to follow a sparse Dirichlet distribution, which naturally incorporates the two physical constraints of HSI and MSI, and the angular difference between the representations is minimized to reduce spectral distortion. Finally, a novel detection algorithm is proposed through spectral unmixing and dictionary-based low-rank decomposition, where the dictionary is constructed with mean-shift clustering and the coefficients of the dictionary are encouraged to be low-rank. Experimental evaluations show significant improvement in the performance of anomaly detection conducted on the abundances (through SU). The effectiveness of the proposed approaches has been evaluated thoroughly by extensive experiments, achieving state-of-the-art results.
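The two physical constraints on abundances mentioned above (non-negativity and sum-to-one per pixel) are commonly enforced through a softmax-style reparameterization, after which the linear mixing model reconstructs each pixel as a convex combination of endmember spectra. The sketch below is a minimal numpy illustration of that constraint mechanism, not the authors' autoencoder; all names and dimensions are hypothetical.

```python
import numpy as np

def abundances_from_logits(logits):
    """Map unconstrained logits to abundances that are non-negative and
    sum to one for each pixel (softmax reparameterization)."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def reconstruct(abundances, endmembers):
    """Linear mixing model: each pixel spectrum is a convex combination
    of the endmember spectra."""
    return abundances @ endmembers

rng = np.random.default_rng(1)
logits = rng.standard_normal((5, 3))   # 5 pixels, 3 endmembers
A = abundances_from_logits(logits)
E = rng.random((3, 100))               # 3 endmember spectra over 100 bands
X = reconstruct(A, E)                  # reconstructed 5 x 100 pixel spectra
print(A.min() >= 0, np.allclose(A.sum(axis=1), 1.0))
```

In a trained unmixing network the logits would come from the encoder and the endmember matrix from the decoder weights; here both are random placeholders.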

    Elastic demand dynamic network user equilibrium: Formulation, existence and computation

    This paper is concerned with dynamic user equilibrium with elastic travel demand (E-DUE), in which the trip demand matrix is determined endogenously. We present an infinite-dimensional variational inequality (VI) formulation that is equivalent to the conditions defining a continuous-time E-DUE problem. An existence result for this VI is established by applying a fixed-point existence theorem (Browder, 1968) in an extended Hilbert space. We present three computational algorithms based on the aforementioned VI and its re-expression as a differential variational inequality (DVI): a projection method, a self-adaptive projection method, and a proximal point method. Rigorous convergence results are provided for these methods, which rely on increasingly relaxed notions of generalized monotonicity: mixed strong-weak monotonicity for the projection method, pseudomonotonicity for the self-adaptive projection method, and quasimonotonicity for the proximal point method. These three algorithms are tested and their solution quality, convergence, and computational efficiency are compared. Our convergence results, which transcend the transportation applications studied here, apply to a broad family of VIs and DVIs, and are the weakest reported to date.
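In finite dimensions, the basic projection method for a variational inequality VI(F, C) is the fixed-point iteration x ← P_C(x − αF(x)), which converges for Lipschitz, strongly monotone F with a small enough step α. The following toy sketch (an affine map on the non-negative orthant, nothing to do with the continuous-time traffic model itself) illustrates that iteration.

```python
import numpy as np

def projection_method(F, project, x0, alpha=0.1, iters=500):
    """Fixed-point iteration x <- P_C(x - alpha * F(x)) for VI(F, C).
    A contraction for Lipschitz, strongly monotone F and small alpha."""
    x = x0.copy()
    for _ in range(iters):
        x = project(x - alpha * F(x))
    return x

# Toy instance: F(x) = M x + q with M positive definite, C = R^n_+.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
F = lambda x: M @ x + q
project = lambda x: np.maximum(x, 0.0)  # projection onto non-negative orthant

x_star = projection_method(F, project, np.zeros(2))
# A solution of VI(F, C) is exactly a fixed point of the projection map.
print(np.allclose(x_star, project(x_star - 0.1 * F(x_star))))
```

For this instance the solution is interior (F(x*) = 0), so x* = M⁻¹(1, 2)ᵀ = (1/11, 7/11); the self-adaptive variant in the paper additionally tunes α on the fly under the weaker pseudomonotonicity assumption.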

    Sequential Domain Patching for Computationally Feasible Multi-objective Optimization of Expensive Electromagnetic Simulation Models

    In this paper, we discuss a simple and efficient technique for multi-objective design optimization of multi-parameter microwave and antenna structures. Our method exploits a stencil-based approach for identification of the Pareto front that does not rely on the population-based metaheuristic algorithms typically used for this purpose. The optimization procedure is realized in two steps. In the first step, an initial Pareto-optimal set representing the best possible trade-offs between conflicting objectives is obtained using a low-fidelity representation (coarsely discretized EM model simulations) of the structure at hand. This is realized by sequential construction and relocation of small design-space segments (patches) in order to create a path connecting the extreme Pareto-front designs identified beforehand. In the second step, the Pareto set is refined to yield the optimal designs at the level of the high-fidelity electromagnetic (EM) model. The appropriate number of patches is determined automatically. The approach is validated by means of two multi-parameter design examples: a compact impedance transformer and an ultra-wideband monopole antenna. Superiority of the patching method over state-of-the-art multi-objective optimization techniques is demonstrated in terms of the computational cost of the design process.
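The notion of Pareto-optimality that the patching procedure tracks can be made concrete with a small non-dominated filter: a design is on the front if no other design is at least as good in every objective and strictly better in one. The sketch below uses hypothetical objective values (both minimized) and is independent of the EM models in the paper.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of a set of objective vectors,
    with every objective minimized. Point p is dominated if some q
    satisfies q <= p componentwise with at least one strict inequality."""
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return points[keep]

# hypothetical trade-off points, e.g. (reflection level, antenna size)
pts = [[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]]
front = pareto_front(pts)
print(front.tolist())  # [[1.0, 5.0], [2.0, 3.0], [4.0, 1.0]]
```

Here [3, 4] is dominated by [2, 3] and drops out; the patching method of the paper works by moving small design-space patches along exactly such a front between its extreme points.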

    Discriminative Block-Diagonal Representation Learning for Image Recognition

    Existing block-diagonal representation studies mainly focus on imposing block-diagonal regularization on training data, while little attention is dedicated to concurrently learning block-diagonal representations of both training and test data. In this paper, we propose a discriminative block-diagonal low-rank representation (BDLRR) method for recognition. In particular, BDLRR is formulated as a joint optimization problem of shrinking the unfavorable representation from off-block-diagonal elements and strengthening the compact block-diagonal representation under the semi-supervised framework of low-rank representation (LRR). To this end, we first impose penalty constraints on the negative representation to eliminate the correlation between different classes, such that the incoherence criterion of the extra-class representation is boosted. Moreover, a constructed subspace model is developed to enhance the self-expressive power of the training samples and to build a representation bridge between the training and test samples, such that the coherence of the learned intra-class representation is consistently heightened. Finally, the resulting optimization problem is solved by an alternating optimization strategy, and a simple recognition algorithm on the learned representation is used for final prediction. Extensive experimental results demonstrate that the proposed method achieves superb recognition results on four face image data sets, three character data sets, and the fifteen-scene multi-category data set. It not only shows superior potential on image recognition but also outperforms state-of-the-art methods.
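In alternating schemes for low-rank representation problems of this kind, the nuclear-norm subproblem has a closed-form solution via singular value thresholding (SVT). The following is a generic SVT sketch in numpy, shown only to illustrate that building block; it is not the full BDLRR solver, and the test matrix is synthetic.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of
    tau * (nuclear norm). Each singular value is shrunk by tau;
    values below tau are set to zero, which lowers the rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(2)
# an exactly rank-2 matrix plus small dense noise
L = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
X = L + 0.01 * rng.standard_normal((20, 15))
Z = svt(X, tau=1.0)
rank_Z = np.linalg.matrix_rank(Z, tol=1e-6)
print(rank_Z)  # at most 2: the noise singular values fall below tau
```

Because the noise spectrum sits far below the threshold while the two signal singular values sit far above it, one SVT step recovers the low-rank structure; in a full LRR-type solver this update alternates with the other penalty terms.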

    Advanced physics-based and data-driven strategies

    Simulation Based Engineering Science (SBES) has brought major improvements in optimization, control and inverse analysis, all leading to a deeper understanding of many processes occurring in the real world. These noticeable breakthroughs are present in a vast variety of sectors such as the aeronautic and automotive industries, mobile telecommunications, and healthcare, among many other fields. Nevertheless, SBES currently confronts several difficulties in providing accurate results for complex industrial problems. Apart from the high computational costs associated with industrial applications, the errors introduced by constitutive modeling become more and more important when dealing with new materials. Concurrently, an unceasingly growing interest in concepts such as Big Data, Machine Learning and Data Analytics has been experienced. Indeed, this interest is intrinsically motivated by exhaustive developments in both data-acquisition and data-storage systems. For instance, an aircraft may produce over 500 GB of data during a single flight. This panorama brings a perfect opportunity to the so-called Dynamic Data Driven Application Systems (DDDAS), whose main objective is to merge classical simulation algorithms with data coming from experimental measurements in a dynamic way. Within this scenario, data and simulations would no longer be uncoupled; rather, exploiting their symbiosis would achieve milestones that were previously inconceivable. Indeed, data will no longer be understood as a static calibration of a given constitutive model; rather, the model will be corrected dynamically as soon as experimental data and simulations tend to diverge. Several numerical algorithms whose main objective is to strengthen the link between data and computational mechanics will be presented throughout this manuscript. The first part of the thesis is mainly focused on parameter identification, data-driven, and data completion techniques.
    The second part is focused on Model Order Reduction (MOR) techniques, since they constitute a fundamental ally in meeting the real-time constraints arising from the DDDAS framework.
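A standard ingredient of the MOR techniques mentioned in the second part is Proper Orthogonal Decomposition (POD): the leading left singular vectors of a matrix of simulation snapshots form a reduced basis onto which the full model is projected. The sketch below demonstrates only this basis-extraction step on synthetic snapshots; the snapshot data and dimensions are hypothetical, not from the thesis.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Proper Orthogonal Decomposition: the r leading left singular
    vectors of the snapshot matrix span the reduced subspace."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

rng = np.random.default_rng(3)
# synthetic snapshots lying (almost) in a 3-dimensional subspace
basis = np.linalg.qr(rng.standard_normal((100, 3)))[0]
coeffs = rng.standard_normal((3, 50))
S = basis @ coeffs + 1e-8 * rng.standard_normal((100, 50))

V = pod_basis(S, r=3)
# projecting the snapshots onto the POD basis loses almost nothing
err = np.linalg.norm(S - V @ (V.T @ S)) / np.linalg.norm(S)
print(err < 1e-6)
```

A 100-dimensional state is thus replaced by 3 reduced coordinates V.T @ x, which is what makes real-time evaluation in a DDDAS loop feasible when the snapshot data truly concentrate on a low-dimensional subspace.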

    1-Bit Matrix Completion

    In this paper we develop a theory of matrix completion for the extreme case of noisy 1-bit observations. Instead of observing a subset of the real-valued entries of a matrix M, we obtain a small number of binary (1-bit) measurements generated according to a probability distribution determined by the real-valued entries of M. The central question we ask is whether or not it is possible to obtain an accurate estimate of M from this data. In general this would seem impossible, but we show that the maximum likelihood estimate under a suitable constraint returns an accurate estimate of M when ||M||_{\infty} <= \alpha and rank(M) <= r. If the log-likelihood is a concave function (e.g., the logistic or probit observation models), then we can obtain this maximum likelihood estimate by optimizing a convex program. In addition, we show that if, instead of recovering M, we simply wish to obtain an estimate of the distribution generating the 1-bit measurements, then we can eliminate the requirement that ||M||_{\infty} <= \alpha. For both cases, we provide lower bounds showing that these estimates are near-optimal. We conclude with a suite of experiments that both verify the implications of our theorems and illustrate some of the practical applications of 1-bit matrix completion. In particular, we compare our program to standard matrix completion methods on movie rating data in which users submit ratings from 1 to 5. In order to use our program, we quantize this data to a single bit, but we allow the standard matrix completion program to have access to the original ratings (from 1 to 5). Surprisingly, the approach based on binary data performs significantly better.
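Under the logistic observation model (one of the concave log-likelihood cases mentioned above), each sampled entry is observed as +1 with probability sigmoid(M_ij) and -1 otherwise. The sketch below illustrates only this measurement process and the resulting log-likelihood, which is concave in the candidate matrix; it is not the authors' constrained maximum-likelihood estimator, and the rank-1 ground truth is a synthetic placeholder.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def observe_1bit(M, mask, rng):
    """Generate +/-1 observations on the sampled entries (mask = 1):
    Y_ij = +1 with probability sigmoid(M_ij), else -1."""
    Y = np.where(rng.random(M.shape) < sigmoid(M), 1.0, -1.0)
    return Y * mask  # zero marks unobserved entries

def log_likelihood(X, Y, mask):
    """Log-likelihood of candidate X under the logistic model, summed
    over observed entries; each term is log(sigmoid(Y_ij * X_ij))."""
    obs = mask.astype(bool)
    return np.sum(np.log(sigmoid(Y[obs] * X[obs])))

rng = np.random.default_rng(4)
u, v = rng.standard_normal(30), rng.standard_normal(30)
M = np.outer(u, v)                       # rank-1 ground truth
M = M / np.abs(M).max()                  # enforce ||M||_inf <= 1
mask = (rng.random(M.shape) < 0.5).astype(float)  # observe ~half the entries
Y = observe_1bit(M, mask, rng)
ll = log_likelihood(M, Y, mask)
print(np.isfinite(ll) and ll < 0)        # a valid (negative) log-likelihood
```

The estimator in the paper maximizes this concave function over matrices satisfying the infinity-norm and rank (in practice, nuclear-norm) constraints, which makes the problem a convex program.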