50 research outputs found

    Spline network modeling and fault classification of a heating ventilation and air-conditioning system

    A spline network, which is an alternative to artificial neural networks, is introduced in this dissertation. The network has an input layer, a single hidden layer, and an output layer, with compactly supported spline basis functions used as the activation functions. The network is used to model the steady-state operation of a complex Heating, Ventilation and Air-Conditioning (HVAC) system. Real data were used to train the spline network, and a neural network was trained on the same data set for comparison. Based on the training process, it is possible to conclude that, compared to artificial neural networks, the spline network is much faster to train, needs fewer input-output pairs, and has no convergence problems; its weights are obtained by solving a set of linear equations. The spline network model of the HVAC system is used to detect faulty operation of the actual system. Once abnormal operation is detected, a fuzzy neural network is used to locate the faulty component. The fuzzy neural network is trained on data obtained by simulating fault scenarios and minimizes ambiguities at decision boundaries. The results of fault classification are presented in the dissertation.
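    The abstract's central computational claim, that the output weights of a spline network follow from a single linear solve rather than iterative training, can be illustrated with a minimal sketch. The model below uses compactly supported degree-1 B-spline (hat) basis functions as the hidden layer and fits the output weights by linear least squares; the synthetic data, knot placement, and basis degree are illustrative assumptions, not the dissertation's HVAC model.

```python
# Minimal sketch: a "spline network" as a linear-in-parameters model.
# Hidden units are compactly supported, degree-1 B-spline (hat) basis
# functions on a uniform knot grid; output weights come from one
# linear least-squares solve (no iterative training).
# All data and knot choices here are illustrative, not the HVAC model.
import numpy as np

def hat_basis(x, centers, width):
    """Evaluate triangular (degree-1 B-spline) basis functions at x."""
    # Column j is max(0, 1 - |x - c_j| / width): support limited to 2 * width.
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - centers[None, :]) / width)

# Synthetic steady-state data: one input, one output.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 200))
y = np.sin(x) + 0.1 * x + 0.05 * rng.standard_normal(x.size)

centers = np.linspace(0.0, 10.0, 21)                     # knot/center grid
H = hat_basis(x, centers, width=centers[1] - centers[0]) # design matrix

# "Training" = solving H w ~= y in the least-squares sense.
w, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ w
print("RMS error:", np.sqrt(np.mean((y - y_hat) ** 2)))
```

    Because the model is linear in its weights, there is no gradient-based iteration and hence no convergence issue, which is the property the abstract contrasts with conventional neural-network training.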

    B-splines for sparse grids : algorithms and application to higher-dimensional optimization

    In simulation technology, computationally expensive objective functions are often replaced by cheap surrogates, which can be obtained by interpolation. Full grid interpolation methods suffer from the so-called curse of dimensionality, rendering them infeasible if the parameter domain of the function is higher-dimensional (four or more parameters). Sparse grids constitute a discretization method that drastically eases the curse, while the approximation quality deteriorates only insignificantly. However, conventional basis functions such as piecewise linear functions are not smooth (continuously differentiable); hence, they are unsuitable for applications in which gradients are required. One example of such an application is gradient-based optimization, in which the availability of gradients greatly improves the speed of convergence and the accuracy of the results. This thesis demonstrates that hierarchical B-splines on sparse grids are well suited for obtaining smooth interpolants in higher dimensionalities. The thesis is organized in two main parts: in the first part, we derive new B-spline bases on sparse grids and study their implications for theory and algorithms; in the second part, we consider three real-world applications in optimization: topology optimization, biomechanical continuum mechanics, and dynamic portfolio choice models in finance. The results reveal that the optimization problems of these applications can be solved accurately and efficiently with hierarchical B-splines on sparse grids.
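    As a point of reference for the hierarchical B-spline bases discussed above, the sketch below evaluates individual B-spline basis functions with the standard Cox-de Boor recursion; the knot vector and degree are illustrative choices, not the thesis's hierarchical construction.

```python
# Minimal sketch: Cox-de Boor recursion for one B-spline basis function.
# This is the standard building block behind hierarchical B-spline bases;
# the knot vector and degree below are illustrative choices only.
def bspline_basis(i, p, t, x):
    """Value at x of the i-th B-spline basis function of degree p with knots t."""
    if p == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + p] != t[i]:
        left = (x - t[i]) / (t[i + p] - t[i]) * bspline_basis(i, p - 1, t, x)
    right = 0.0
    if t[i + p + 1] != t[i + 1]:
        right = (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) * bspline_basis(i + 1, p - 1, t, x)
    return left + right

# Example: two cubic (p = 3) basis functions on a uniform knot vector.
knots = [j / 8 for j in range(9)]   # 0, 1/8, ..., 1
print(bspline_basis(2, 3, knots, 0.4), bspline_basis(3, 3, knots, 0.4))
```

    Cubic B-splines evaluated this way are twice continuously differentiable, which is exactly the smoothness needed when gradients of the surrogate are fed to a gradient-based optimizer.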

    A hybrid hair model using three dimensional fuzzy textures

    Human hair modeling and rendering have always been a challenging topic in computer graphics. Techniques for human hair modeling include explicit geometric models as well as volume density models; recently, hybrid cluster models have also been successful in this area. In this study, we present a novel three-dimensional texture model called 3D Fuzzy Textures, together with algorithms to generate them. We then use the developed model along with a cluster model to give human hair complex hairstyles such as curly and wavy styles. Our model requires little user effort to model curly and wavy hair styles. With this study, we aim to eliminate the drawbacks of the volume density model and the cluster hair model by means of 3D fuzzy textures. A three-dimensional cylindrical texture mapping function is introduced for mapping purposes. Current-generation graphics hardware is utilized in the design of the rendering system, enabling high-performance rendering.
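    To make the mapping step concrete, the sketch below shows one plausible form of a cylindrical 3D texture mapping: a point near a hair cluster is mapped to an angle around the cluster axis, a position along it, and a radial distance. The axis alignment, normalization, and parameter names are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of a cylindrical 3D texture mapping: a point around a
# hair-cluster axis is mapped to (u, v, w) texture coordinates, where u is
# the angle around the axis, v the position along it, and w the radial
# distance. The axis alignment and normalization are illustrative assumptions.
import math

def cylindrical_uvw(point, axis_origin, axis_length, radius):
    """Map a 3D point to cylindrical texture coordinates (u, v in [0, 1]; w may exceed 1)."""
    x = point[0] - axis_origin[0]
    y = point[1] - axis_origin[1]
    z = point[2] - axis_origin[2]                        # axis assumed along +z
    u = (math.atan2(y, x) + math.pi) / (2.0 * math.pi)   # angle -> [0, 1)
    v = z / axis_length                                  # height along the axis
    w = math.hypot(x, y) / radius                        # normalized radial distance
    return u, v, w

print(cylindrical_uvw((0.5, 0.0, 2.0), (0.0, 0.0, 0.0), 10.0, 1.0))
```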

    Doctor of Philosophy

    This dissertation consists of two parts that focus on two interrelated areas of Applied Mathematics. The first part explores fundamental properties and applications of functions with values in L-spaces. The second part is connected to Approximation Theory and dives deeper into the analysis of functions with values in specific classes of L-spaces (in particular, L-spaces of sets). In the first project, devoted to the theory and numerical methods for the solution of integral equations, we explore linear Volterra and Fredholm integral equations for functions with values in L-spaces (which are generalizations of set-valued and fuzzy-valued functions). We prove the existence and uniqueness of the solution for such equations, suggest algorithms for finding approximate solutions, and study their convergence. The exploration of these equations is of great importance given the wide variety of their applications in biology (population modeling), physics (heat conduction), and engineering (feedback systems), among others. We extend the aforementioned existence and uniqueness results to nonlinear equations. In addition, we study the dependence of solutions of such equations on variations in the data. In order to better analyze the convergence of the suggested algorithms for the solutions of integral equations, we develop new results on the approximation of functions with values in L-spaces by adapted linear positive operators (Bernstein, Schoenberg, and modified Schoenberg operators, and piecewise linear interpolation). The second project is devoted to problems of interpolation by generalized polynomials and splines for functions whose values lie in a specific L-space, namely a space of sets. Because the structure of such a space is richer than that of a general L-space, additional tools are available (e.g., the support function of a set), which allow us to obtain deeper results for the approximation and interpolation of set-valued functions. We are working on defining various methods of approximation based on the support function of a set, and questions related to error estimates for the approximation of set-valued functions by these novel methods are also investigated.
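    The scalar (real-valued) analogue of the algorithms mentioned for linear Volterra equations can be sketched with the classical trapezoidal discretization of a second-kind equation u(x) = f(x) + ∫_a^x K(x, s) u(s) ds; the kernel, right-hand side, and grid below are illustrative assumptions, and the L-space-valued setting of the dissertation is not reproduced here.

```python
# Minimal sketch: trapezoidal-rule solution of a scalar linear Volterra
# equation of the second kind, u(x) = f(x) + int_a^x K(x, s) u(s) ds.
# This is the classical real-valued analogue of the algorithms discussed;
# the L-space-valued setting of the dissertation is not reproduced here.
import numpy as np

def solve_volterra(f, K, a, b, n):
    """Return grid x and approximate solution u on n + 1 equally spaced nodes."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    u = np.empty(n + 1)
    u[0] = f(x[0])
    for i in range(1, n + 1):
        # Trapezoidal weights: h/2 at the endpoints s = x[0], x[i]; h in between.
        s = h * (0.5 * K(x[i], x[0]) * u[0] + np.sum(K(x[i], x[1:i]) * u[1:i]))
        # Solve (1 - h/2 * K(x_i, x_i)) u_i = f(x_i) + s for the implicit endpoint term.
        u[i] = (f(x[i]) + s) / (1.0 - 0.5 * h * K(x[i], x[i]))
    return x, u

# Test problem with known solution u(x) = exp(x): u(x) = 1 + int_0^x u(s) ds.
x, u = solve_volterra(lambda t: 1.0, lambda t, s: 0 * s + 1.0, 0.0, 1.0, 100)
print("max error:", np.max(np.abs(u - np.exp(x))))
```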

    Mathematical Methods, Modelling and Applications

    This volume deals with novel, high-quality research results on a wide class of mathematical models with applications in engineering, nature, and the social sciences. Both analytical and numerical approaches, and both deterministic and uncertain settings, are treated. Complex and multidisciplinary models are covered, including novel techniques for obtaining observation data and for pattern recognition. The treated problems include examples from engineering, the social sciences, physics, biology, and the health sciences; the novelty lies in their mathematical treatment. Mathematical models are built, some under a deterministic approach and others taking into account the uncertainty of the data, leading to random models. The resulting mathematical representations take the form of equations and systems of equations of different types: difference equations, ordinary differential equations, partial differential equations, integral equations, and algebraic equations. Across the chapters of the book, a wide range of approaches to solving these models can be found, from analytical to numerical techniques such as finite difference schemes, finite volume methods, iteration schemes, and numerical integration methods.

    Automatic lumen segmentation in IVOCT images using binary morphological reconstruction

    Background: Atherosclerosis causes millions of deaths annually, yielding billions in expenses around the world. Intravascular Optical Coherence Tomography (IVOCT) is a medical imaging modality that displays high-resolution images of coronary cross-sections. Nonetheless, quantitative information can only be obtained through segmentation, with which more adequate diagnoses, therapies, and interventions can be provided. Since IVOCT is a relatively new modality, many segmentation methods available in the literature for other modalities could be successfully applied to IVOCT images, improving accuracy and usefulness.
    Method: An automatic lumen segmentation approach based on the Wavelet Transform and Mathematical Morphology is presented. The methodology is divided into three main parts. First, the preprocessing stage attenuates undesirable information and enhances important information. Second, in the feature extraction block, the wavelet transform is combined with an adapted version of Otsu thresholding, so that tissue information is discriminated and binarized. Finally, binary morphological reconstruction refines the binary information and constructs the binary lumen object.
    Results: The evaluation was carried out by segmenting 290 challenging images from human and pig coronaries and rabbit iliac arteries; the outcomes were compared with gold standards produced by experts. The resulting accuracy was: True Positive (%) = 99.29 ± 2.96, False Positive (%) = 3.69 ± 2.88, False Negative (%) = 0.71 ± 2.96, Max False Positive Distance (mm) = 0.1 ± 0.07, Max False Negative Distance (mm) = 0.06 ± 0.1.
    Conclusions: By segmenting a number of IVOCT images with various features, the proposed technique proved to be robust and more accurate than published studies; in addition, the method is completely automatic, providing a new tool for IVOCT segmentation.
    Acknowledgements: São Paulo Research Foundation – Brazil (FAPESP – Process Number: 2012/15721-2), National Council of Scientific and Technological Development, Brazil (CNPq), Heart Institute of São Paulo, Brazil (InCor), Biomedical Engineering Laboratory of the University of São Paulo, Brazil (LEB-USP). The anonymous reviewers, who have made important contributions to this work.
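    The two central steps named in the Method section, Otsu thresholding and binary morphological reconstruction, can be sketched generically with scikit-image as below. This is an illustration of the technique class, not the authors' pipeline: the wavelet preprocessing and the adapted Otsu variant are omitted, and the toy image and seed point are assumptions.

```python
# Minimal sketch of the two central steps, using scikit-image: a global Otsu
# threshold to binarize tissue, then binary morphological reconstruction from
# a seed inside the lumen so that only the lumen-connected region survives.
# Generic illustration only; the toy image and seed point are assumptions.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import reconstruction

def segment_lumen(image, seed_point):
    """Return a boolean mask of the dark region containing seed_point."""
    tissue = image > threshold_otsu(image)   # Otsu separates bright tissue
    lumen_candidates = ~tissue               # dark pixels (lumen + background)

    # Marker image: zero everywhere except at the seed inside the lumen.
    marker = np.zeros(image.shape, dtype=float)
    marker[seed_point] = 1.0

    # Reconstruction by dilation grows the marker inside the candidate mask,
    # keeping only the connected component that contains the seed.
    recon = reconstruction(marker, lumen_candidates.astype(float), method="dilation")
    return recon > 0

# Toy example: a bright ring (tissue) around a dark center (lumen).
yy, xx = np.mgrid[:128, :128]
r = np.hypot(yy - 64, xx - 64)
img = np.exp(-((r - 40) ** 2) / 50.0) + 0.05 * np.random.default_rng(0).random((128, 128))
mask = segment_lumen(img, seed_point=(64, 64))
print("lumen pixels:", int(mask.sum()))
```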

    Teaching a Robot to Drive - A Skill Learning Inspired Approach

    Robots can make our lives easier by taking over tasks that are unpleasant, or even dangerous, for us. To be deployed efficiently, they should be autonomous, adaptive, and easy to instruct. Traditional 'white-box' approaches in robotics rely on the engineer's understanding of the underlying physical structure of the given problem. Based on this understanding, the engineer can devise a possible solution and implement it in the system. This approach is very powerful but nonetheless limited. Its most important drawback is that systems built this way depend on predefined knowledge, so every new behaviour requires the same expensive development cycle. In contrast, humans and some other animals are not restricted to their innate behaviours but can acquire numerous additional skills during their lifetime. Moreover, they do not appear to need detailed knowledge of the (physical) workings of a given task in order to do so. These properties are also desirable for artificial systems. In this dissertation we therefore investigate the hypothesis that principles of human skill learning can lead to alternative methods for adaptive system control. We examine this hypothesis on the task of autonomous driving, which is a classic control problem and offers the potential for a wide range of applications. The concrete task is to learn a basic, anticipatory driving behaviour from a human teacher. After highlighting the relevant aspects of human skill learning and introducing the concepts of 'internal models' and 'chunking', we describe how these are applied to the given task. Chunking is realised by means of a database in which examples of human driving behaviour are stored and linked to descriptions of the visually extracted road trajectory. This is first implemented in a laboratory environment with a robot and later, in the course of the European DRIVSCO project, transferred to a real car. We also investigate the learning of visual 'forward models', which belong to the internal models, and their effect on the robot's control performance. The main result of this interdisciplinary and application-oriented work is a system that can generate appropriate action plans in response to the visually perceived road trajectory without requiring metric information. The predicted actions in the laboratory environment are steering and speed; for the real car they are steering and acceleration, although the system's predictive capacity for the latter is limited. In other words, the robot learns autonomous driving from a human teacher, and the car learns to predict human driving behaviour. The latter was successfully demonstrated during the project review by an international team of experts. The results of this work are relevant for applications in robot control, particularly in the area of intelligent driver assistance systems.
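    The chunking database described above, which stores examples of human driving linked to descriptions of the visually extracted road trajectory, can be caricatured as a nearest-neighbour lookup: a new trajectory descriptor retrieves the action plan of the most similar stored situation. The descriptor, the Euclidean metric, and the data below are illustrative assumptions, not the DRIVSCO system.

```python
# Minimal sketch of a chunking-style lookup: pairs of (road-trajectory
# descriptor, action plan) are stored, and a new situation retrieves the
# plan of the nearest stored descriptor. Descriptors, plans, and the
# Euclidean metric are illustrative assumptions, not the DRIVSCO system.
import numpy as np

class DrivingChunkDB:
    def __init__(self):
        self.descriptors = []   # e.g. sampled road curvature ahead of the car
        self.plans = []         # e.g. (steering, speed) pairs over a short horizon

    def store(self, descriptor, plan):
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        self.plans.append(np.asarray(plan, dtype=float))

    def recall(self, descriptor):
        """Return the action plan of the most similar stored situation."""
        d = np.asarray(descriptor, dtype=float)
        dists = [np.linalg.norm(d - s) for s in self.descriptors]
        return self.plans[int(np.argmin(dists))]

# Store two demonstrated "chunks": a gentle left curve and a straight road.
db = DrivingChunkDB()
db.store([0.02, 0.03, 0.04], plan=[(0.10, 0.8), (0.12, 0.8)])   # left curve
db.store([0.00, 0.00, 0.00], plan=[(0.00, 1.0), (0.00, 1.0)])   # straight

print(db.recall([0.01, 0.02, 0.05]))   # closer to the left-curve chunk
```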

    Acta Polytechnica Hungarica 2006
