
    Methods for constraint-based conceptual free-form surface design

    Summary: Constraint-based design of free-form surfaces is a powerful method in computer-aided design. Existing implementations, however, are mostly restricted to the interpolation of boundary and isoparametric curves. The most widespread approach in this context are the so-called multi-patch methods, which try to generate surface assemblies from a network of three-dimensional curves (often mixed with unstructured point clouds) such that the curves and points are interpolated by the surfaces. The curves are treated as boundaries of rectangular or triangular bi-polynomial or polynomial patches, and this restriction limits the flexibility of the approach. In this dissertation we propose to use arbitrary, i.e. also non-isoparametric, curves. This has two advantages: first, a B-spline surface can, for example, be deformed along a user-defined curve while other curves or points are held fixed; second, a B-spline surface can interpolate curves that cannot be mapped to isoparametric lines of the surface. We treat three kinds of constraints: incidence of an arbitrary curve on a B-spline surface, prescribed surface normals along an arbitrary curve (used to establish tangential transitions between two surfaces), and the so-called variational constraints. The latter serve, among other things, to optimize the physical and optical properties of the surfaces; they are the Gaussian normal equations of the second-order surface functionals known from the literature. The dissertation consists of two parts. The first part deals with setting up the linear systems of equations that represent the constraints mentioned above; the second part treats methods for solving these systems. The core of the first part is the extension and generalization of the blossom-based algorithm for the composition of polynomials in Bézier and B-spline form: given a B-spline surface and a B-spline curve in the parameter space of the surface, we show that the control points of the three-dimensional surface curve defined as the polynomial composition of the two can be expressed through a precomputable linear transformation (a matrix) of the surface control points. Incidence relations between curves and surfaces can thus be defined exactly and in a very elegant and compact way. Compared with known methods, this approach is more efficient, numerically more stable, and does not increase the condition number of the linear equations to be solved. The efficiency is achieved by purpose-built data structures and a careful analysis of the combinatorial properties of polar forms. The equations defining the tangency and variational constraints are implemented as an application and extension of this algorithm. Symbolic and numerical operations on B-spline polynomials (multiplication, differentiation, integration) are also described, consistently using the matrix representation of B-spline polynomials. Solving this kind of constraint problem means finding the control points of a B-spline surface such that the stated conditions are satisfied.
This is accomplished by solving linear systems of equations that are, in general, under-determined and ill-conditioned. Since no unique, numerically stable solution exists in such cases, the usual methods for solving linear systems do not succeed, and we resort to so-called regularization methods based on the singular value decomposition (SVD) of the system matrix. In particular, the L-curve is employed, a "numerical high-frequency filter" that enables us to compute a stable solution. In general, however, even these methods are not sufficient to generate a surface with the desired aesthetic and physical properties: deforming a tensor-product surface along a non-isoparametric curve produces unwanted oscillations and deformations, an effect called "surface aliasing". We present two methods to remove these aliasing effects. The first is intended primarily for deformations of an existing B-spline surface along a non-isoparametric curve; the surface to be deformed is reparametrized so that the curve is mapped to an isoparametric line of the new surface. The reparametrization of a B-spline surface is not a closed operation, since the resulting surface in general has no B-spline representation; we compute an arbitrarily accurate approximation of it by interpolating curve networks extracted from the surface to be reparametrized. The second method is purely algebraic: additional conditions are imposed on the solution of the system of equations that suppress or completely remove the aliasing effects. A constrained minimum of an objective function is sought whose global minimum corresponds to the "optimal" shape of the surface; second-order smoothing functionals are used as objective functions. Because the equations are nearly linearly dependent, a stable solution of such an optimization problem can only be obtained with regularization methods that take the given objective function into account. We apply the so-called modified singular value decomposition in combination with the L-curve filter; this algorithm minimizes the error of the geometric constraints while keeping the solution as close as possible to the optimum of the objective function.
The constraint-based design of free-form surfaces is currently limited to tensor-product interpolation of orthogonal curve networks or equally spaced grids of points. The so-called multi-patch methods, applied mainly in the context of scattered data interpolation, construct surfaces from given boundary curves and derivatives along them. The limitation to boundary or iso-parametric curves considerably lowers the flexibility of this approach. In this thesis, we propose to compute surfaces from arbitrary (that is, not only iso-parametric) curves. This allows us to deform a B-spline surface along an arbitrary user-defined curve, or to interpolate a B-spline surface through a set of curves which cannot be mapped to iso-parametric lines of the surface.
We consider three kinds of constraints: the incidence of a curve on a B-spline surface, prescribed surface normals along an arbitrary curve incident on a surface, and the so-called variational constraints, which enforce a physically and optically advantageous shape of the computed surfaces. The thesis is divided into two parts: in the first part, we describe efficient methods to set up the equations for the above-mentioned linear constraints between curves and surfaces; in the second part, we discuss methods for solving such constraints. The core of the first part is the extension and generalization of the blossom-based polynomial composition algorithm for B-splines: given a B-spline surface and a B-spline curve in the domain of that surface, we compute a matrix that represents a linear transformation of the surface control points such that, after the transformation, we obtain the control points of the curve representing the polynomial composition of the domain curve and the surface. The result is a 3D B-spline curve that is always exactly incident on the surface. This so-called composition matrix represents a set of linear curve-surface incidence constraints. Compared to previously used methods, our approach is more efficient, numerically more stable, and does not unnecessarily increase the condition number of the matrix. The thesis includes a careful analysis of the complexity and combinatorial properties of the algorithm. We also discuss algebraic operations on B-spline polynomials (multiplication, differentiation, integration). The matrix representation of B-spline polynomials is used throughout the thesis. We show that the equations for tangency and variational constraints are easily obtained by re-using the methods elaborated for incidence constraints. Solving generalized curve-surface constraints means finding the control points of the unknown surface given one or several curves incident on that surface. This is accomplished by solving large and, generally, under-determined and badly conditioned linear systems of equations. In such cases, no unique and numerically stable solution exists; hence, the usual methods such as Gaussian elimination or QR decomposition cannot be applied in a straightforward manner. We propose to use regularization methods based on the Singular Value Decomposition (SVD). We apply the so-called L-curve, which can be seen as a numerical high-frequency filter. The filter automatically singles out a stable solution such that the best possible satisfaction of the defined constraints is achieved. However, even the SVD along with the L-curve filter cannot be applied blindly: it turns out that it is not sufficient to require only algebraic stability of the solution. Tensor-product surfaces deformed along arbitrary incident curves exhibit unwanted deformations due to the rectangular structure of the model space. We discuss a geometric and an algebraic method to remove this so-called surface aliasing effect. The first method reparametrizes the surface such that a general curve constraint is converted into an iso-parametric curve constraint, which can easily be solved by standard linear algebra methods without aliasing. The reparametrized surface is computed by means of the approximate surface-surface composition algorithm, which is also introduced in this thesis. Since an exact symbolic reparametrization is not possible, an arbitrarily accurate approximation of the resulting surface is obtained using constrained curve network interpolation.
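
To make the curve-surface incidence idea concrete, the following sketch evaluates a tensor-product B-spline surface along a B-spline curve defined in its parameter domain, so that the sampled 3D curve c(t) = S(u(t), v(t)) lies on the surface by construction. This is only a numerical illustration under invented data (control net, knot vectors, degrees); it is not the thesis' exact blossom-based composition matrix.

```python
import numpy as np
from scipy.interpolate import BSpline

# Bicubic B-spline surface S(u, v): clamped knots, 4x4 control net (invented data).
ku = kv = 3
tu = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
tv = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
ctrl = np.random.default_rng(0).random((4, 4, 3))   # ctrl[i, j] = 3D control point

def surface_point(u, v):
    """Evaluate S(u, v) by contracting the control net over v, then over u."""
    iso_ctrl = BSpline(tv, ctrl.transpose(1, 0, 2), kv)(v)   # (4, 3): control points of the iso-v curve
    return BSpline(tu, iso_ctrl, ku)(u)                      # 3D point on the surface

# Quadratic B-spline domain curve (u(t), v(t)) inside the parameter square of S.
kd = 2
td = np.array([0, 0, 0, 1, 1, 1], dtype=float)
domain_curve = BSpline(td, np.array([[0.0, 0.0], [0.9, 0.2], [1.0, 1.0]]), kd)

# Sample the composed 3D curve; by construction it is incident on S.
ts = np.linspace(0.0, 1.0, 50)
composed = np.array([surface_point(*domain_curve(t)) for t in ts])
print(composed.shape)   # (50, 3)
```

The thesis goes further than such sampling: it expresses the control points of the composed curve exactly as a precomputed linear map of the surface control points, which is what turns incidence into a linear constraint.
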
The second method states additional constraints which suppress or completely remove the aliasing. Formally, we solve a constrained least-squares approximation problem: we minimize a surface objective function subject to the defined curve constraints. The objective function is chosen such that it attains its minimum when the surface has optimal shape; we use a linear combination of second-order surface smoothing functionals. When solving such problems we have to deal with nearly linearly dependent equations; problems of this type are called ill-posed. Sophisticated numerical methods therefore have to be applied in order to single out the degrees of freedom (control points of the surface) needed to satisfy the given constraints, while the remaining degrees of freedom are used to enforce an optically pleasing shape of the surface. We apply the Modified Truncated SVD (MTSVD) algorithm in connection with the L-curve filter, which determines a compromise between an optically pleasant shape of the surface and constraint satisfaction in a particularly efficient manner.
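
As a rough illustration of the SVD-based regularization discussed above, the sketch below computes truncated-SVD solutions of an under-determined, nearly rank-deficient system for several truncation levels. The matrix and sizes are invented; the L-curve corner selection and the modified TSVD used in the thesis are not implemented, only hinted at by the printed (residual norm, solution norm) pairs.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD least-norm solution keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = min(k, int(np.sum(s > 0)))
    # Invert only the retained singular values; the small ones are filtered out,
    # which suppresses the numerically unstable components of the solution.
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 120))                        # 40 constraints, 120 unknowns
A[-5:] = A[:5] + 1e-8 * rng.standard_normal((5, 120))     # nearly dependent rows -> ill-conditioned
b = rng.standard_normal(40)

for k in (10, 30, 40):
    x = tsvd_solve(A, b, k)
    print(k, np.linalg.norm(A @ x - b), np.linalg.norm(x))
# An L-curve criterion would pick k near the corner of the (residual norm, solution norm) curve.
```
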

    New Results in ell_1 Penalized Regression

    Here we consider penalized regression methods and extend the results surrounding the l1-norm penalty. We address a more recent development that generalizes previous methods by penalizing a linear transformation of the coefficients of interest instead of penalizing just the coefficients themselves. We introduce an approximate algorithm to fit this generalization and a fully Bayesian hierarchical model that is a direct analogue of the frequentist version. A number of benefits are derived from the Bayesian perspective, most notably the choice of the tuning parameter and a natural means to estimate the variation of estimates, a notoriously difficult task for the frequentist formulation. We then introduce Bayesian trend filtering, which exemplifies the benefits of our Bayesian version. Bayesian trend filtering is shown to be an empirically strong technique for fitting univariate nonparametric regressions. Through a simulation study, we show that Bayesian trend filtering reduces prediction error and attains more accurate coverage probabilities than the frequentist method. We then apply Bayesian trend filtering to real data sets, where our method is quite competitive against a number of other popular nonparametric methods.
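
For readers unfamiliar with the generalized l1 penalty, here is a minimal frequentist sketch of second-order trend filtering, where the penalty is applied to a linear transformation D of the coefficients. It uses CVXPY purely for convenience and is not the approximate algorithm or the Bayesian hierarchical model developed in this work; the data, penalty matrix and tuning parameter are illustrative choices.

```python
# minimize 0.5 * ||y - beta||^2 + lam * ||D @ beta||_1   (generalized l1 / trend filtering)
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)   # noisy univariate signal

# Second-order difference operator: penalizing ||D beta||_1 encourages piecewise-linear fits.
D = np.diff(np.eye(n), n=2, axis=0)

beta = cp.Variable(n)
lam = 10.0                                                 # tuning parameter, chosen by hand here
objective = cp.Minimize(0.5 * cp.sum_squares(y - beta) + lam * cp.norm1(D @ beta))
cp.Problem(objective).solve()

fit = beta.value                                           # smoothed, piecewise-linear estimate
print(float(np.mean((fit - np.sin(2 * np.pi * x)) ** 2)))  # error against the noiseless signal
```
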

    Assisting digital volume correlation with mechanical image-based modeling: application to the measurement of kinematic fields at the architecture scale in cellular materials

    The measurement of displacement and strain fields at small scales in complex microstructures remains a major challenge in experimental mechanics, partly because of the image acquisitions and the poor texture available at these scales. This is notably the case for cellular materials when they are imaged with conventional micro-tomographs and can undergo complex deformation mechanisms. Since the validation of numerical models and the identification of the mechanical properties of materials rely on accurate displacement and strain measurements, robust and reliable image correlation algorithms must be designed and implemented. When digital volume correlation (DVC) is used for cellular materials, one faces a paradox: the absence of texture at the scale of the constituent leads to using the architecture itself as the marker for the correlation. As a result, ordinary DVC techniques fail to measure kinematics at sub-cellular scales associated with complex local mechanical behaviour such as bending or buckling of struts. The objective of this thesis is to design a DVC technique for measuring displacement fields in cellular materials at the scale of their architecture. The technique assists image correlation with a weak elastic regularization based on an automatically generated, image-based mechanical model. The proposed method introduces a separation of scales above which DVC is dominant and below which it is assisted by the image-based mechanical model. A first numerical study compares different techniques for building image-based mechanical models, with emphasis on two particular computational methods: the finite element method (FEM) and the finite cell method (FCM), which immerses the complex geometry in a regular high-order grid without using a mesher. While the FCM avoids a delicate initial meshing phase, several of its parameters remain difficult to set; in this work they are adjusted to obtain (a) the best accuracy (bounded by pixelation errors) while (b) ensuring minimal complexity. Regarding regularized image correlation, several virtual experiments based on different numerical simulations (elasticity, plasticity and geometric non-linearity) are first carried out to analyse the influence of the introduced regularization parameters; the measurement errors can then be quantified against the finite element reference solutions. The ability of the method to measure complex kinematics in the absence of texture is demonstrated in non-linear regimes such as buckling. Finally, the proposed approach is generalized to the volume correlation of the different deformation states of the material and to the automatic construction of the cellular micro-architecture using either a B-spline grid of arbitrary order (FCM) or a finite element mesh (FEM). The efficiency and accuracy of the proposed approach are demonstrated experimentally through the measurement of complex kinematics in a polyurethane foam loaded in compression during an in situ test.
Measuring displacement and strain fields at low observable scales in complex microstructures still remains a challenge in experimental mechanics, often because of the combination of low-definition images with poor texture at this scale. The problem is particularly acute in the case of cellular materials, when imaged by conventional micro-tomographs, for which complex, highly non-linear local phenomena can occur. As the validation of numerical models and the identification of mechanical properties of materials must rely on accurate measurements of displacement and strain fields, robust and faithful image correlation algorithms must be designed and implemented. With cellular materials, the use of digital volume correlation (DVC) faces a paradox: in the absence of markings or exploitable texture on or in the struts or cell walls, the available speckle is formed by the material architecture itself. This leads to the inability of classical DVC codes to measure kinematics at the cellular, and a fortiori sub-cellular, scales, precisely because the interpolation basis of the displacement field cannot account for the complexity of the underlying kinematics, especially when bending or buckling of beams or walls occurs. The objective of the thesis is to develop a DVC technique for the measurement of displacement fields in cellular materials at the scale of their architecture. The proposed solution consists in assisting DVC with a weak elastic regularization based on an automatic image-based mechanical model. The proposed method introduces a separation of scales above which DVC is dominant and below which it is assisted by image-based modeling. First, a numerical investigation and comparison of different techniques for automatically building a geometric and mechanical model from tomographic images is conducted. Two particular methods are considered: the finite element method (FEM) and the finite cell method (FCM). The FCM is a fictitious-domain method that immerses the complex geometry in a high-order structured grid and does not require meshing. In this context, various discretization parameters are delicate to choose; in this work, they are adjusted to obtain (a) the best possible accuracy (bounded by pixelation errors) while (b) ensuring minimal complexity. Concerning the ability of the image-based mechanical models to regularize DIC, several virtual experiments are performed in two dimensions in order to finely analyse the influence of the introduced regularization lengths for different input mechanical behaviours (elastic, elasto-plastic and geometrically non-linear) and in comparison with ground truth. We show that the method can estimate complex local displacement and strain fields from speckle-free, low-definition images, even in non-linear regimes such as local buckling. Finally, a three-dimensional generalization is performed through the development of a DVC framework. It takes as input the reconstructed volumes at the different deformation states of the material and automatically constructs the cellular micro-architecture geometry, considering either an immersed structured B-spline grid of arbitrary order or a finite element mesh.
Experimental evidence is provided by measuring the complex kinematics of a polyurethane foam under compression during an in situ test.
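
A toy one-dimensional analogue may help convey how a weak mechanical regularization assists image correlation: nodal displacements are found by minimizing an image-conservation residual plus a small smoothness penalty that takes over where the texture is poor. Everything here (synthetic texture, node count, penalty weight) is invented, and it is a scalar stand-in for the FE/FCM-regularized DVC of the thesis, not its implementation.

```python
import numpy as np
from scipy.optimize import least_squares

npx = 400
x = np.arange(npx, dtype=float)
texture = lambda s: np.sin(0.12 * s) + 0.5 * np.sin(0.031 * s + 1.0)  # synthetic 1D "image" texture
u_true = 3.0 * np.sin(2 * np.pi * x / npx)                             # imposed displacement field
f = texture(x)                        # reference image
g = texture(x - u_true)               # deformed image (pattern advected by u_true)

nodes = np.linspace(0, npx - 1, 15)   # coarse displacement mesh (15 nodes)

def residuals(u_nodes, lam=2.0):
    u_pix = np.interp(x, nodes, u_nodes)        # nodal -> pixel displacements (linear shape functions)
    r_img = g - np.interp(x - u_pix, x, f)      # image conservation residual
    r_reg = lam * np.diff(u_nodes, n=2)         # weak second-difference ("elastic-like") penalty
    return np.concatenate([r_img, r_reg])

sol = least_squares(residuals, x0=np.zeros(nodes.size))
print(np.max(np.abs(np.interp(x, nodes, sol.x) - u_true)))  # displacement error in pixels
```
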

    Improved profile fitting and quantification of uncertainty in experimental measurements of impurity transport coefficients using Gaussian process regression

    The need to fit smooth temperature and density profiles to discrete observations is ubiquitous in plasma physics, but the prevailing techniques for this have many shortcomings that cast doubt on the statistical validity of the results. This issue is amplified in the context of validation of gyrokinetic transport models (Holland et al 2009 Phys. Plasmas 16 052301), where the strong sensitivity of the code outputs to input gradients means that inadequacies in the profile fitting technique can easily lead to an incorrect assessment of the degree of agreement with experimental measurements. In order to rectify the shortcomings of standard approaches to profile fitting, we have applied Gaussian process regression (GPR), a powerful non-parametric regression technique, to analyse an Alcator C-Mod L-mode discharge used for past gyrokinetic validation work (Howard et al 2012 Nucl. Fusion 52 063002). We show that the GPR techniques can reproduce the previous results while delivering more statistically rigorous fits and uncertainty estimates for both the value and the gradient of plasma profiles with an improved level of automation. We also discuss how the use of GPR can allow for dramatic increases in the rate of convergence of uncertainty propagation for any code that takes experimental profiles as inputs. The new GPR techniques for profile fitting and uncertainty propagation are quite useful and general, and we describe the steps to implementation in detail in this paper. These techniques have the potential to substantially improve the quality of uncertainty estimates on profile fits and the rate of convergence of uncertainty propagation, making them of great interest for wider use in fusion experiments and modelling efforts.
    United States. Dept. of Energy. Office of Fusion Energy Sciences (Award DE-FC02-99ER54512)
    United States. Dept. of Energy. Office of Science (Contract DE-AC05-06OR23177)
    United States. Dept. of Energy. Office of Advanced Scientific Computing Research (Award DE-SC0007099)
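
The following sketch fits a noisy synthetic profile with Gaussian process regression using scikit-learn, returning a posterior mean and a 1-sigma band on a fine grid. The kernel, hyperparameters and data are assumptions for illustration, not the authors' implementation, and the gradient below is a finite difference of the posterior mean rather than the analytic gradient posterior with uncertainties used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(2)
rho = np.sort(rng.uniform(0, 1, 25))                                   # normalized radius (assumed)
Te = 3.0 * (1 - rho**2) ** 1.5 + 0.1 * rng.standard_normal(rho.size)   # noisy synthetic profile

kernel = ConstantKernel(1.0) * RBF(length_scale=0.2) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(rho[:, None], Te)                                              # hyperparameters optimized by marginal likelihood

rho_fine = np.linspace(0, 1, 200)
mean, std = gpr.predict(rho_fine[:, None], return_std=True)            # smooth fit and 1-sigma band
grad = np.gradient(mean, rho_fine)                                     # crude gradient of the posterior mean only
print(mean[:3], std[:3])
```
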

    Tensor B-spline numerical method for PDEs: a high performance approach

    Solutions of Partial Differential Equations (PDEs) form the basis of many mathematical models in physics and medicine. In this work, a novel Tensor B-spline methodology for numerical solutions of linear second-order PDEs is proposed. The methodology applies the B-spline signal processing framework and computational tensor algebra in order to construct high-performance numerical solvers for PDEs. The method allows high-order approximations, is mesh-free and matrix-free, and is computationally and memory efficient. The first chapter introduces the main ideas of the Tensor B-spline method, describes the main contributions of the thesis and outlines the thesis structure. The second chapter provides an introduction to PDEs, reviews the numerical methods for solving PDEs, introduces splines and signal processing techniques with B-splines, and describes tensors and the computational tensor algebra. The third chapter describes the principles of the Tensor B-spline methodology. The main aspects are 1) discretization of the PDE variational formulation via B-spline representation of the solution, the coefficients, and the source term, 2) introduction of the tensor B-spline kernels, 3) application of tensors and computational tensor algebra to the discretized variational formulation of the PDE, 4) tensor-based analysis of the problem structure, 5) derivation of efficient computational techniques, and 6) efficient boundary processing and numerical integration procedures. The fourth chapter describes 1) different computational strategies of the Tensor B-spline solver and an evaluation of their performance, 2) the application of the method to the forward problem of Optical Diffusion Tomography and an extensive comparison with the state-of-the-art Finite Element Method on synthetic and real medical data, 3) high-performance multicore CPU- and GPU-based implementations, and 4) the solution of large-scale problems on hardware with limited memory resources.
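
A small example of the computational-tensor-algebra idea behind such matrix-free tensor-product discretizations: a Kronecker-structured operator can be applied without ever assembling the large matrix, via the identity (A ⊗ B) vec(X) = vec(B X Aᵀ). The operators below are random stand-ins for 1D B-spline mass/stiffness factors, not the solver described in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
nx, ny = 40, 30
A = rng.standard_normal((nx, nx))    # 1D operator along x (stand-in for a B-spline factor)
B = rng.standard_normal((ny, ny))    # 1D operator along y
X = rng.standard_normal((ny, nx))    # coefficients on the 2D tensor-product grid

# Matrix-free application: never form the (nx*ny) x (nx*ny) Kronecker matrix.
y_fast = (B @ X @ A.T).ravel(order='F')

# Reference computation with the explicitly assembled Kronecker matrix (small sizes only).
y_slow = np.kron(A, B) @ X.ravel(order='F')
print(np.allclose(y_fast, y_slow))   # True: same result at a fraction of the memory and work
```
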

    Machine Learning for Signal Reconstruction from Streaming Time-series Data

    Papers I and II are extracted as separate files to meet IEEE publication policy for accepted manuscripts. Paper IV is extracted from the dissertation pending publication. Nowadays, deploying cyber-physical networked systems generates tremendous streams of data, with data rates increasing as time goes by. This trend is especially noticeable in several fairly automated sectors, such as energy or telecommunications. Compared to the last decades, this not only represents an additional large volume of data to explore and a need for more efficient and scalable data analysis methods, but also raises additional challenges in the design and analysis of real-time streaming data processing algorithms. In many applications of interest, it is required to process a sequence of samples from multiple, possibly correlated, data time series that are acquired at different sampling rates and which may be quantized in amplitude at different resolutions. A commonly sought goal is to obtain a low-error signal reconstruction that can be uniformly resampled with a temporal resolution as fine as desired, hence facilitating subsequent data analyses. This Ph.D. thesis consists of a compendium of four papers that incrementally investigate the task of sequentially reconstructing a signal from a stream of multivariate time series of quantization intervals under several requirements encountered in practice and detailed next. First, we investigate how to track signals from streams of quantization intervals while enforcing low model complexity in the function estimation. Specifically, we explore the use of reproducing kernel Hilbert space-based online regression techniques expressly tailored for such a task. The core techniques we devise and employ draw on the abundant theoretical and practical results in the literature on proximal operators and multiple-kernel approaches. Second, we require the signal to be sequentially reconstructed, subject to smoothness constraints, and as soon as a data sample is available (zero-delay response). These well-motivated requirements appear in many practical problems, including online trajectory planning, real-time control systems, and high-speed digital-to-analog conversion. We address this challenge through a novel spline-based approach underpinned by a sequential decision-making framework and assisted with deep learning techniques. Specifically, we use recurrent neural networks to capture the temporal dependencies among data, helping to reduce the roughness of the reconstruction on average. Finally, we analyze the requirement of consistency, which amounts to exploiting all available information about the signal source and acquisition system to optimize some figure of reconstruction merit. In our context, consistency means guaranteeing that the reconstruction lies within the acquired quantization intervals. Consistency has been proven to entail a profitable-in-practice asymptotic error-rate decay as the sampling rate increases. In particular, we investigate the impact of consistency on zero-delay reconstruction and also incorporate the idea of exploiting the spatiotemporal dependencies among multivariate signals.
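
To illustrate the flavour of RKHS-based online regression on streaming, quantized samples, the sketch below runs a NORMA-style functional gradient update with a single RBF kernel, using quantization-interval midpoints as targets. The signal, quantizer and hyperparameters are invented, and the multiple-kernel, proximal, zero-delay spline and RNN-assisted methods of the thesis are not reproduced here.

```python
import numpy as np

def rbf(a, b, gamma=200.0):
    return np.exp(-gamma * (a - b) ** 2)

class OnlineKernelRegressor:
    """One-pass kernel regression via functional gradient descent (NORMA-like)."""
    def __init__(self, eta=0.5, lam=0.01, gamma=200.0):
        self.eta, self.lam, self.gamma = eta, lam, gamma
        self.centers, self.alphas = [], []

    def predict(self, t):
        return sum(a * rbf(t, c, self.gamma) for a, c in zip(self.alphas, self.centers))

    def update(self, t, y):
        err = self.predict(t) - y
        # Shrink old coefficients (regularization), then add the new sample as a kernel center.
        self.alphas = [(1.0 - self.eta * self.lam) * a for a in self.alphas]
        self.alphas.append(-self.eta * err)
        self.centers.append(t)

rng = np.random.default_rng(4)
t_stream = np.sort(rng.uniform(0, 1, 300))
signal = np.sin(2 * np.pi * 3 * t_stream)
step = 0.25                                             # coarse amplitude quantizer
y_stream = np.floor(signal / step) * step + step / 2    # interval midpoints as surrogate targets

model = OnlineKernelRegressor()
for t, y in zip(t_stream, y_stream):
    model.update(t, y)                                  # single streaming pass, sample by sample

t_dense = np.linspace(0, 1, 1000)
recon = np.array([model.predict(t) for t in t_dense])   # uniformly resampled estimate
print(float(np.mean((recon - np.sin(2 * np.pi * 3 * t_dense)) ** 2)))
```
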