
    A sharp interface isogeometric strategy for moving boundary problems

    The proposed methodology is first used to model stationary and propagating cracks. The crack face is enriched with the Heaviside function, which captures the displacement discontinuity, while the crack tips are enriched with asymptotic displacement functions that reproduce the tip singularity. The enriching degrees of freedom associated with the crack tips are chosen as the stress intensity factors (SIFs), so that these quantities can be extracted directly from the solution without a posteriori integral calculations. As a second application, the Stefan problem is modeled with a hybrid function/derivative-enriched interface. Since the interface geometry is explicitly defined, normals and curvatures can be obtained analytically at any point on the interface, allowing complex boundary conditions that depend on curvature or normal direction to be imposed naturally. The enriched approximation thus captures the interfacial discontinuity in the temperature gradient and enables imposition of the Gibbs-Thomson condition during solidification simulation. Lastly, shape optimization through the configuration of finite-sized heterogeneities is studied. The optimization relies on the recently derived configurational derivative, which describes the sensitivity of an arbitrary objective with respect to arbitrary design modifications of a heterogeneity inserted into a domain. The THB-splines that serve as the underlying approximation produce sufficiently smooth solutions near the boundaries of the heterogeneity for accurate calculation of the configurational derivatives. (Abstract shortened by ProQuest.)
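
    For orientation, the enrichment and the interface condition described above can be written in their standard textbook forms; the exact enrichment functions, coefficients, and sign conventions used in the dissertation may differ. A minimal sketch:

```latex
% Heaviside/tip-enriched displacement approximation (generic XFEM-style form;
% per the abstract, the tip coefficients are tied directly to the SIFs)
u^h(\mathbf{x}) = \sum_{i \in I} N_i(\mathbf{x})\,\mathbf{u}_i
                + \sum_{j \in J} N_j(\mathbf{x})\,H(\mathbf{x})\,\mathbf{a}_j
                + \sum_{k \in K} N_k(\mathbf{x}) \sum_{l} F_l(r,\theta)\,\mathbf{b}_{k,l}

% Gibbs-Thomson condition: interface temperature depressed by mean curvature \kappa
T_\Gamma = T_m - \frac{\gamma\, T_m}{\rho\, L}\,\kappa
```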

    New strategies for curve and arbitrary-topology surface constructions for design

    This dissertation presents some novel constructions for curves and surfaces with arbitrary topology in the context of geometric modeling. In particular, it deals mainly with three intimately connected topics that are of interest in both theoretical and applied research: subdivision surfaces, non-uniform local interpolation (in both the univariate and bivariate cases), and spaces of generalized splines. Specifically, we describe a strategy for the integration of subdivision surfaces in computer-aided design systems and provide examples to show the effectiveness of its implementation. Moreover, we present a construction of locally supported, non-uniform, piecewise polynomial univariate interpolants of minimum degree with respect to other prescribed design parameters (such as support width, order of continuity and order of approximation). Still in the setting of non-uniform local interpolation, but in the case of surfaces, we devise a novel parameterization strategy that, together with a suitable patching technique, allows us to define composite surfaces that interpolate given arbitrary-topology meshes or curve networks and satisfy the requirements of regularity and aesthetic shape quality usually needed in the CAD modeling framework. Finally, in the context of generalized splines, we propose an approach for the construction of the optimal normalized totally positive (B-spline) basis, acknowledged as the best basis of representation for design purposes, as well as a numerical procedure for checking the existence of such a basis in a given generalized spline space. All the constructions presented here have been devised with application and implementation in mind, together with the related requirements that numerical procedures must satisfy, particularly in the CAD context.
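
    As a rough, self-contained illustration of the non-uniform spline machinery underlying these constructions (not the dissertation's actual interpolation or basis-construction algorithms), a Cox-de Boor evaluation of B-spline basis functions on a non-uniform knot vector could look like the sketch below; the function name and the example knot vector are invented for the example.

```python
def bspline_basis(i, p, knots, t):
    """Evaluate the i-th B-spline basis function of degree p at parameter t
    on a (possibly non-uniform) knot vector, via the Cox-de Boor recursion."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    denom_l = knots[i + p] - knots[i]
    if denom_l > 0.0:
        left = (t - knots[i]) / denom_l * bspline_basis(i, p - 1, knots, t)
    right = 0.0
    denom_r = knots[i + p + 1] - knots[i + 1]
    if denom_r > 0.0:
        right = (knots[i + p + 1] - t) / denom_r * bspline_basis(i + 1, p - 1, knots, t)
    return left + right

# Example: cubic basis functions on a clamped, non-uniform knot vector
knots = [0, 0, 0, 0, 0.3, 0.7, 1, 1, 1, 1]
vals = [bspline_basis(i, 3, knots, 0.5) for i in range(len(knots) - 4)]
print(vals, sum(vals))  # partition of unity: the nonzero values sum to 1
```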

    Large Scale Kernel Methods for Fun and Profit

    Kernel methods are among the most flexible classes of machine learning models with strong theoretical guarantees. Wide classes of functions can be approximated arbitrarily well with kernels, and fast convergence and learning rates have been formally shown to hold. Exact kernel methods are known to scale poorly with increasing dataset size, and we believe that one of the factors limiting their usage in modern machine learning is the lack of scalable and easy-to-use algorithms and software. The main goal of this thesis is to study kernel methods from the point of view of efficient learning, with particular emphasis on large-scale data, but also on low-latency training and user efficiency. We improve the state of the art for scaling kernel solvers to datasets with billions of points using the Falkon algorithm, which combines random projections with fast optimization. Running it on GPUs, we show how to fully utilize available computing power for training kernel machines. To boost the ease of use of approximate kernel solvers, we propose an algorithm for automated hyperparameter tuning. By minimizing a penalized loss function, a model can be learned together with its hyperparameters, reducing the time needed for user-driven experimentation. In the setting of multi-class learning, we show that, under stringent but realistic assumptions on the separation between classes, a wide set of algorithms needs far fewer data points than in the more general setting (without assumptions on class separation) to reach the same accuracy. The first part of the thesis develops a framework for efficient and scalable kernel machines. This raises the question of whether our approaches can be used successfully in real-world applications, especially compared to alternatives based on deep learning, which are often deemed hard to beat. The second part aims to investigate this question on two main applications, chosen because of the paramount importance of having an efficient algorithm. First, we consider the problem of instance segmentation of images taken from the iCub robot. Here Falkon is used as part of a larger pipeline, but the efficiency afforded by our solver is essential to ensure smooth human-robot interactions. In the second instance, we consider time-series forecasting of wind speed, analysing the relevance of different physical variables to the predictions themselves. We investigate different schemes to adapt i.i.d. learning to the time-series setting. Overall, this work aims to demonstrate, through novel algorithms and examples, that kernel methods are up to computationally demanding tasks, and that there are concrete applications in which their use is warranted and more efficient than that of other, more complex, and less theoretically grounded models.
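
    As a conceptual sketch only, the random-projection (Nystrom) idea behind Falkon-style solvers can be illustrated in plain NumPy; the actual Falkon algorithm adds a Cholesky-based preconditioner, conjugate-gradient iterations, and GPU kernels, none of which appear here, and the function names below are hypothetical rather than part of any released library.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

def nystrom_krr_fit(X, y, m=100, lam=1e-3, sigma=1.0, seed=None):
    """Nystrom-approximate kernel ridge regression: restrict the solution to
    the span of m randomly chosen 'inducing' points, as in Falkon-style solvers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=m, replace=False)]
    Knm = gaussian_kernel(X, centers, sigma)          # n x m
    Kmm = gaussian_kernel(centers, centers, sigma)    # m x m
    # Solve (Knm^T Knm + lam * n * Kmm) alpha = Knm^T y
    A = Knm.T @ Knm + lam * len(X) * Kmm
    alpha = np.linalg.solve(A + 1e-10 * np.eye(m), Knm.T @ y)
    return centers, alpha

def nystrom_krr_predict(Xtest, centers, alpha, sigma=1.0):
    return gaussian_kernel(Xtest, centers, sigma) @ alpha

# Toy usage on synthetic 1-D regression data
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(2000)
centers, alpha = nystrom_krr_fit(X, y, m=50, sigma=0.5, seed=0)
print(nystrom_krr_predict(np.array([[0.0], [1.5]]), centers, alpha, sigma=0.5))
```

    Restricting the solution to m inducing points reduces the cost from quadratic memory and cubic time in the number of samples to O(nm) memory and O(nm^2) time, which is what makes training on very large datasets feasible in principle.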