67 research outputs found

    Suitably graded THB-spline refinement and coarsening: Towards an adaptive isogeometric analysis of additive manufacturing processes

    In the present work we introduce a complete set of algorithms to efficiently perform adaptive refinement and coarsening by exploiting truncated hierarchical B-splines (THB-splines) defined on suitably graded isogeometric meshes, referred to as admissible mesh configurations. We apply the proposed algorithms to two-dimensional linear heat transfer problems with a localized moving heat source, as simplified models for additive manufacturing applications. We first verify the accuracy of the admissible adaptive scheme against an overkill solution, and then compare our results with similar schemes that use different refinement and coarsening algorithms, with or without taking grading parameters into account. This study shows that the THB-spline admissible solution delivers an optimal discretization not only in terms of approximation accuracy, but also in the (reduced) number of degrees of freedom per time step. In the last example we investigate the capability of the algorithms to approximate the thermal history of the problem for a more complicated source path. The comparison with uniform and non-admissible hierarchical meshes demonstrates that also in this case our adaptive scheme attains the desired accuracy while strongly improving computational efficiency. Comment: 20 pages, 12 figures
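    The refine-near-the-source idea with a grading constraint can be illustrated with a minimal 1D sketch (illustrative code only, not the paper's THB-spline machinery; the linear-falloff marking rule, the one-level grading condition, and all function names are assumptions for illustration):

    ```python
    def mark_levels(n_cells, source_pos, max_level, radius=0.1):
        """Assign a target refinement level to each cell: deeper near the moving source."""
        levels = []
        for i in range(n_cells):
            centre = (i + 0.5) / n_cells
            dist = abs(centre - source_pos)
            # linear falloff: max_level at the source, level 0 beyond `radius`
            lvl = max(0, round(max_level * (1 - dist / radius))) if dist < radius else 0
            levels.append(lvl)
        return levels

    def enforce_grading(levels):
        """Raise cell levels until adjacent cells differ by at most one level,
        a toy stand-in for the admissibility (grading) condition."""
        changed = True
        while changed:
            changed = False
            for i in range(len(levels) - 1):
                if levels[i] < levels[i + 1] - 1:
                    levels[i] = levels[i + 1] - 1
                    changed = True
                elif levels[i + 1] < levels[i] - 1:
                    levels[i + 1] = levels[i] - 1
                    changed = True
        return levels
    ```

    As the source moves, re-marking and re-grading at each time step refines ahead of the source and coarsens behind it, which is where the per-step saving in degrees of freedom comes from.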

    Probabilistic Parametric Curves for Sequence Modeling

    Representations of sequential data are typically based on the assumption that observed sequences are realizations of an unknown underlying stochastic process. Determining such a representation is usually framed as a learning problem and yields a sequence model. In this context, the model must be able to capture the multimodal nature of the data without mixing individual modes. To model an underlying stochastic process, commonly used neural-network-based approaches either learn to parameterize a probability distribution or learn an implicit representation using stochastic inputs or neurons. These models typically incorporate Monte Carlo methods or other approximate solutions to enable parameter estimation and probabilistic inference. This holds even for regression-based approaches built on mixture density networks, which likewise require Monte Carlo simulation for multi-modal inference. This leaves a research gap for fully regression-based approaches to parameter estimation and probabilistic inference. The present work therefore introduces a probabilistic extension of BĂ©zier curves ($\mathcal{N}$-curves) as a basis for modeling continuous-time stochastic processes with a bounded index set. The proposed model, termed the $\mathcal{N}$-curve model, is based on mixture density networks (MDN) and BĂ©zier curves whose control points are assumed to be normally distributed. Using an MDN-based approach is in line with current attempts to frame uncertainty estimation as a regression problem, and yields a generic model that is broadly applicable as a base model for probabilistic sequence modeling.
    A key advantage of the model is its ability to generate smooth, multi-modal predictions in a single inference step, without requiring Monte Carlo simulation. Because BĂ©zier curves form its basis, the model can in theory be used for arbitrarily high data dimensions by embedding the control points in a high-dimensional space. To lift the theoretical restrictions imposed by the focus on bounded index sets, a conceptual extension of the $\mathcal{N}$-curve model is additionally presented that allows infinite stochastic processes to be modeled. Essential properties of the proposed model and its extension are demonstrated on several sequence-synthesis examples. Since the $\mathcal{N}$-curve model is applicable to most use cases, its suitability is evaluated extensively on several multi-step prediction tasks using real-world data. First, the model is evaluated against commonly used probabilistic sequence models in the context of pedestrian trajectory prediction, where it outperforms all competing models. A qualitative evaluation examines the model's behavior in a forecasting context, and difficulties in evaluating probabilistic sequence models in a multimodal setting are discussed. Furthermore, the model is applied to human motion prediction in order to assess its intended scalability to higher-dimensional data. On this task it outperforms commonly used simple and neural-network-based baseline models and is on par with various state-of-the-art models in different situations, demonstrating its applicability in this higher-dimensional example.
    Finally, difficulties in covariance estimation and the smoothing properties of the $\mathcal{N}$-curve model are discussed

    Probabilistic Parametric Curves for Sequence Modeling

    This work proposes a probabilistic extension to BĂ©zier curves as a basis for effectively modeling stochastic processes with a bounded index set. The proposed stochastic process model is based on Mixture Density Networks and BĂ©zier curves with Gaussian random variables as control points. A key advantage of this model is given by the ability to generate multi-mode predictions in a single inference step, thus avoiding the need for Monte Carlo simulation.
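    The property that makes single-step inference possible can be sketched directly: a BĂ©zier curve is a Bernstein-weighted linear combination of its control points, and a linear combination of independent Gaussians is itself Gaussian, so the curve's distribution at any parameter value is available in closed form (a minimal illustration, not the authors' implementation; function names are invented):

    ```python
    from math import comb

    def bernstein(i, n, t):
        """Bernstein basis polynomial B_{i,n}(t)."""
        return comb(n, i) * t**i * (1 - t) ** (n - i)

    def curve_point(mus, sigmas, t):
        """Mean and standard deviation of a Bezier curve whose control points
        are independent Gaussians (mu_i, sigma_i). The weighted sum of
        Gaussians with Bernstein weights is itself Gaussian, so no sampling
        is needed: mean = sum b_i mu_i, var = sum b_i^2 sigma_i^2."""
        n = len(mus) - 1
        mean = sum(bernstein(i, n, t) * mus[i] for i in range(n + 1))
        var = sum(bernstein(i, n, t) ** 2 * sigmas[i] ** 2 for i in range(n + 1))
        return mean, var ** 0.5
    ```

    A mixture over several such curves (the MDN part of the model) then yields multi-modal predictions, again without any Monte Carlo step.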

    Shape Modeling with Spline Partitions

    Shape modelling (with methods that output shapes) is a new and important task in Bayesian nonparametrics and bioinformatics. In this work, we focus on Bayesian nonparametric methods for capturing shapes by partitioning a space using curves. In related work, the classical Mondrian process is used to partition spaces recursively with axis-aligned cuts, and is widely applied to multi-dimensional and relational data; it outputs hyper-rectangles. Recently, the random tessellation process was introduced as a generalization of the Mondrian process, partitioning a domain with non-axis-aligned cuts in an arbitrary-dimensional space and outputting polytopes. Motivated by these processes, we propose a novel parallelized Bayesian nonparametric approach to partition a domain with curves, enabling complex data shapes to be captured. We apply our method to an HIV-1-infected human macrophage image dataset, as well as to simulated datasets, to illustrate our approach. We compare against support vector machines, random forests and state-of-the-art computer vision methods such as simple linear iterative clustering superpixel image segmentation. We develop an R package that is available at \url{https://github.com/ShufeiGe/Shape-Modeling-with-Spline-Partitions}.

    Nonlinear profile monitoring using spline functions

    In this study, two new integrated control charts, named the T2-MAE chart and the MS-MAE chart, are introduced for monitoring the quality of a process when the mathematical form of the nonlinear profile model for the quality measure is complicated and cannot be specified. The T2-MAE chart is composed of two memoryless-type control charts, while the MS-MAE chart combines one memory-type and one memoryless-type control chart. The normality assumption on the error terms in the nonlinear profile model is extended to a generalized model for both proposed charts. An intensive simulation study is conducted to evaluate the performance of the T2-MAE and MS-MAE charts. Simulation results show that the MS-MAE chart outperforms the T2-MAE chart, with fewer false alarms during Phase I monitoring. Moreover, the MS-MAE chart is sensitive to different shifts in the model parameters and profile shape during Phase II monitoring. An example based on the vertical density profile is used for illustration.
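    The MAE half of both charts reduces to a simple statistic: the mean absolute deviation of each observed profile from a reference curve, monitored against a trial control limit estimated from in-control Phase I profiles. A heavily simplified sketch (not the paper's charts; the mean-plus-k-sigma limit and all names are illustrative assumptions):

    ```python
    def mae(profile, reference):
        """Mean absolute error between an observed profile and the reference curve."""
        return sum(abs(y - r) for y, r in zip(profile, reference)) / len(profile)

    def phase1_limit(profiles, reference, k=3.0):
        """Trial upper control limit for the MAE statistic over a set of
        in-control Phase I profiles: mean + k * sample standard deviation."""
        stats = [mae(p, reference) for p in profiles]
        m = sum(stats) / len(stats)
        s = (sum((x - m) ** 2 for x in stats) / (len(stats) - 1)) ** 0.5
        return m + k * s
    ```

    In Phase II, a new profile whose MAE exceeds the limit would signal a shift in profile shape; the memory-type component of the MS-MAE chart additionally accumulates small shifts across successive profiles.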

    High-performance geometric vascular modelling

    Image-based high-performance geometric vascular modelling and reconstruction is an essential component of computer-assisted surgery for the diagnosis, analysis and treatment of cardiovascular diseases. However, it is an extremely challenging task to efficiently reconstruct accurate geometric structures of blood vessels from medical images. For one thing, the shape of an individual section of a blood vessel is highly irregular because of the squeeze of other tissues and the deformation caused by vascular diseases. For another, a vascular system is a very complicated network of blood vessels with different types of branching structures. Although some existing vascular modelling techniques can reconstruct the geometric structure of a vascular system, they are either time-consuming or lacking in accuracy. What is more, these techniques rarely consider the interior tissue of the vascular wall, which consists of complicated layered structures. As a result, it is necessary to develop a better vascular geometric modelling technique, one that is not only high-performance and highly accurate in the reconstruction of vascular surfaces, but can also model the interior tissue structures of vascular walls. This research aims to develop a state-of-the-art patient-specific medical image-based geometric vascular modelling technique to solve the above problems. The main contributions of this research are:
    - The Skeleton Marching technique, developed to reconstruct the geometric structures of blood vessels with high performance and high accuracy. With the proposed technique, the highly complicated vascular reconstruction task is reduced to a set of simple localised geometric reconstruction tasks, which can be carried out in parallel. These locally reconstructed vascular geometric segments are then combined using shape-preserving blending operations to faithfully represent the geometric shape of the whole vascular system.
    - The Thin Implicit Patch method, developed to realistically model the interior geometric structures of vascular tissues. This method allows multi-layer interior tissue structures to be embedded inside the vascular wall to illustrate the geometric details of the blood vessel in the real world

    2D and 3D surface image processing algorithms and their applications

    This doctoral dissertation aims to develop algorithms for 2D image segmentation applied to solar filament disappearance detection, for 3D mesh simplification, and for 3D image warping in pre-surgery simulation. Filament area detection in solar images is an image segmentation problem. A combined thresholding and region-growing method is proposed and applied to this application. Based on the filament area detection results, filament disappearances are reported in real time. The solar images from 1999 are processed with the proposed system and three statistical results on filaments are presented. 3D images can be obtained by passive and active range sensing. An image registration process finds the transformation between each pair of range views. To model an object, a common reference frame into which all views can be transformed must be defined. After registration, the range views should be integrated into a non-redundant model, and optimization is necessary to obtain a complete 3D model: a single surface representation can better fit the data. It may be further simplified for efficient rendering, storage and transmission, or converted to other formats. This work proposes an efficient algorithm for the mesh simplification problem, approximating an arbitrary mesh by a simplified mesh. The algorithm uses a root-mean-square (RMS) distance error metric to decide the facet curvature: the two vertices of an edge and their surrounding vertices determine the average plane. The simplification results are excellent and the computation is fast; the algorithm is compared with six other major simplification algorithms. Image morphing refers to methods that gradually and continuously deform a source image into a target image while producing the in-between models. Image warping is a continuous deformation of a graphical object, and a morphing process is usually composed of warping and interpolation.
    This work develops a direct-manipulation-of-free-form-deformation-based method and application for pre-surgical planning. The developed user interface provides a friendly interactive tool for plastic surgery; nose augmentation surgery is presented as an example. Displacement vectors and lattices of different resolutions are used to obtain various deformation results. During the deformation, the volume change of the model is also considered, based on a simplified skin-muscle model
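    The RMS-distance curvature proxy used in the simplification algorithm can be sketched as follows: given an average plane (a point and a normal), compute the RMS of the signed distances of the neighbouring vertices to that plane; near-planar neighbourhoods score low and are collapsed first (an illustrative sketch under these assumptions, not the dissertation's actual code):

    ```python
    def rms_plane_distance(vertices, point, normal):
        """RMS distance of `vertices` to the plane through `point` with
        direction `normal` (need not be unit length); low values indicate a
        locally flat region, a cheap curvature proxy for edge collapse."""
        nx, ny, nz = normal
        norm = (nx * nx + ny * ny + nz * nz) ** 0.5
        total = 0.0
        for (x, y, z) in vertices:
            # signed point-to-plane distance via the projection onto the normal
            dist = ((x - point[0]) * nx + (y - point[1]) * ny + (z - point[2]) * nz) / norm
            total += dist * dist
        return (total / len(vertices)) ** 0.5
    ```

    Ranking candidate edges by this score and collapsing the cheapest first preserves high-curvature detail while aggressively simplifying flat regions.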

    A new calibration method for charm jet identification validated with proton-proton collision events at √s = 13 TeV

    Many measurements at the LHC require efficient identification of heavy-flavour jets, i.e. jets originating from bottom (b) or charm (c) quarks. An overview of the algorithms used to identify c jets is given and a novel method to calibrate them is presented. This new method adjusts the entire distributions of the outputs obtained when the algorithms are applied to jets of different flavours. It is based on an iterative approach exploiting three distinct control regions that are enriched with either b jets, c jets, or light-flavour and gluon jets. Results are presented in the form of correction factors evaluated using proton-proton collision data with an integrated luminosity of 41.5 fb⁻¹ at √s = 13 TeV, collected by the CMS experiment in 2017. The closure of the method is tested by applying the measured correction factors on simulated data sets and checking the agreement between the adjusted simulation and collision data. Furthermore, a validation is performed by testing the method on pseudodata, which emulate various mismodelling conditions. The calibrated results enable the use of the full distributions of heavy-flavour identification algorithm outputs, e.g. as inputs to machine-learning models. Thus, they are expected to increase the sensitivity of future physics analyses
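    One step of the iterative idea can be sketched in miniature: in a control region enriched in a given flavour, subtract the other simulated components from the data histogram and divide the remainder by the target flavour's template, giving per-bin correction factors for the discriminator-output distribution (a heavily simplified illustration; the two-bin histograms and all names are invented, and the real method iterates this across the three control regions until the factors converge):

    ```python
    def correction_factors(data, templates, target):
        """Per-bin scale factors for the `target` flavour template:
        (data minus the other simulated components) / target template,
        evaluated bin by bin in a region enriched in `target` jets."""
        factors = []
        for b in range(len(data)):
            others = sum(hist[b] for flavour, hist in templates.items() if flavour != target)
            factors.append((data[b] - others) / templates[target][b])
        return factors
    ```

    Because the factors reshape the whole output distribution rather than a single working-point efficiency, the corrected discriminator values can be fed directly into downstream machine-learning models.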
