9 research outputs found

    Bayesian Prediction of Pre-Stressed Concrete Bridge Deflection Using Finite Element Analysis

    Vertical deflection has been emphasized as an important safety indicator in the management of railway bridges. Therefore, various standards and studies have suggested physics-based models for predicting the time-dependent deflection of railway bridges. However, these approaches may be limited by model errors caused by uncertainties in various factors, such as material properties, creep coefficient, and temperature. This study proposes a new Bayesian method that employs both a finite element model and actual measurement data. To overcome the limitations of an imperfect finite element model and a shortage of data, Gaussian process regression is introduced and modified to consider both the finite element analysis results and the actual measurement data. In addition, the probabilistic prediction model can be updated whenever additional measurement data are available. In this manner, a probabilistic prediction model that is customized to the target bridge can be obtained. The proposed method is applied to a pre-stressed concrete railway bridge under construction in the Republic of Korea, as an example of a bridge for which accurate time-dependent deflection is difficult to predict and measurement data are insufficient. Probabilistic prediction models are successfully derived by applying the proposed method, and the corresponding prediction results agree with the actual measurements, even though the bridge experienced large downward deflections during the construction stage. In addition, the practical uses of the prediction models are discussed.
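    A minimal sketch of the general idea described above, not the paper's actual model: a Gaussian process (here via scikit-learn) models the residual between a placeholder finite-element prediction and measured deflections, and is simply re-fit whenever new measurements arrive. The function name, kernel, and all numbers are assumptions for illustration.

```python
# Hedged sketch: GP regression on the residual between a finite-element prediction
# and measured deflections, re-fit as new measurement data arrive.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fe_deflection(t):
    """Placeholder for the physics-based FE deflection prediction at time t (days)."""
    return -5.0 * (1.0 - np.exp(-t / 200.0))  # illustrative creep-like curve, in mm

# Measurements collected so far (illustrative values: days, mm).
t_obs = np.array([[10.0], [30.0], [60.0], [90.0]])
y_obs = np.array([-1.2, -2.4, -3.9, -4.8])

# Model the discrepancy (measurement minus FE prediction) with a GP.
residual = y_obs - fe_deflection(t_obs).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=50.0) + WhiteKernel(1e-2),
                              normalize_y=True)
gp.fit(t_obs, residual)

# Probabilistic prediction: FE mean corrected by the GP, with predictive std.
t_new = np.linspace(0.0, 365.0, 50).reshape(-1, 1)
mean_resid, std = gp.predict(t_new, return_std=True)
prediction = fe_deflection(t_new).ravel() + mean_resid
# Whenever new measurements arrive, append them to (t_obs, y_obs) and re-fit.
```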

    A Framework for Evaluating Approximation Methods for Gaussian Process Regression

    Gaussian process (GP) predictors are an important component of many Bayesian approaches to machine learning. However, even a straightforward implementation of Gaussian process regression (GPR) requires O(n^2) space and O(n^3) time for a data set of n examples. Several approximation methods have been proposed, but there is a lack of understanding of the relative merits of the different approximations and of the situations in which they are most useful. We recommend assessing the quality of the predictions obtained as a function of the compute time taken, and comparing against standard baselines (e.g., Subset of Data and FITC). We empirically investigate four different approximation algorithms on four different prediction problems, and make our code available to encourage future comparisons.
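    As a rough illustration of the trade-off the abstract refers to, the sketch below (assuming scikit-learn; data, sizes, and kernel are arbitrary) implements the Subset of Data baseline: an exact GP fit on m randomly chosen points, so training cost drops from O(n^3) to O(m^3) at the price of discarding most of the data.

```python
# Hedged sketch of the "Subset of Data" baseline: fit an exact GP on m << n
# randomly chosen points instead of the full data set.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n, m = 20000, 500                          # full set vs. subset size (illustrative)
X = rng.uniform(-3.0, 3.0, size=(n, 1))
y = np.sin(2.0 * X).ravel() + 0.1 * rng.standard_normal(n)

idx = rng.choice(n, size=m, replace=False)  # random subset selection
gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.01))
gp.fit(X[idx], y[idx])                      # O(m^3) training instead of O(n^3)

X_test = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
mu, sd = gp.predict(X_test, return_std=True)  # predictive mean and std
```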

    Example-based learning for single-image super-resolution and JPEG artifact removal

    This paper proposes a framework for single-image super-resolution and JPEG artifact removal. The underlying idea is to learn a map from input low-quality images (suitably preprocessed low-resolution or JPEG-encoded images) to target high-quality images based on example pairs of input and output images. To keep the complexity of the resulting learning problem at a moderate level, a patch-based approach is taken such that kernel ridge regression (KRR) scans the input image with a small window (patch) and produces a patch-valued output for each output pixel location. These constitute a set of candidate images, each of which reflects different local information. An output image is then obtained as a convex combination of candidates for each pixel based on the estimated confidences of the candidates. To reduce the time complexity of training and testing for KRR, a sparse solution is found by combining the ideas of kernel matching pursuit and gradient descent. As a regularized solution, KRR leads to better generalization than simply storing the examples, as is done in existing example-based super-resolution algorithms, and results in much less noisy images. However, it may introduce blurring and ringing artifacts around major edges, since sharp changes are penalized severely. A prior model of a generic image class that takes into account the discontinuity property of images is adopted to resolve this problem. Comparison with existing super-resolution and JPEG artifact removal methods shows the effectiveness of the proposed method. Furthermore, the proposed method is generic in that it has the potential to be applied to many other image enhancement applications.
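    The patch-based regression step might look roughly like the following sketch, which uses scikit-learn's KernelRidge and averages overlapping candidate patches instead of the paper's confidence-weighted convex combination; patch sizes, kernel parameters, and the stand-in training pairs are assumptions.

```python
# Hedged sketch of patch-based kernel ridge regression: map a low-quality input
# patch to a high-quality output patch, scanning the image window by window.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

patch, out = 5, 3                          # input window and output patch size (assumed)
rng = np.random.default_rng(0)
# Stand-in training pairs: rows are flattened low-quality / high-quality patches.
X_train = rng.standard_normal((2000, patch * patch))
Y_train = rng.standard_normal((2000, out * out))

krr = KernelRidge(alpha=1e-3, kernel="rbf", gamma=0.1)
krr.fit(X_train, Y_train)                  # one multi-output regressor for the patch map

def enhance(image):
    """Scan a 2D grayscale image and regress an output patch per window location."""
    h, w = image.shape
    result = np.zeros(image.shape, dtype=float)
    weight = np.zeros(image.shape, dtype=float)
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            x = image[i:i + patch, j:j + patch].reshape(1, -1)
            y = krr.predict(x).reshape(out, out)
            ci, cj = i + (patch - out) // 2, j + (patch - out) // 2
            result[ci:ci + out, cj:cj + out] += y     # overlapping candidate patches
            weight[ci:ci + out, cj:cj + out] += 1.0
    # Simple averaging stands in for the paper's confidence-weighted combination.
    return result / np.maximum(weight, 1.0)
```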

    A matching pursuit approach to sparse Gaussian process regression

    In this paper we propose a new basis selection criterion for building sparse GP regression models that provides promising gains in accuracy as well as efficiency over previous methods. Our algorithm is much faster than that of Smola and Bartlett, while in generalization it greatly outperforms the information gain approach proposed by Seeger et al., especially in the quality of the predictive distributions.

    Enhanced Learning Strategies for Tactile Shape Estimation and Grasp Planning of Unknown Objects

    Grasping is one of the key capabilities for a robot operating and interacting with humans in a real environment. Conventional approaches require accurate information on both object shape and robotic system modeling; their performance can therefore be easily degraded by noisy sensor data or modeling errors. Moreover, identifying the shape of an unknown object under vision-denied conditions is still a challenging problem in the robotics field. To address this issue, this thesis investigates the estimation of unknown object shape using tactile exploration and task-oriented grasp planning for a novel object using enhanced learning techniques. In order to rapidly estimate the shape of an unknown object, this thesis presents a novel multi-fidelity-based optimal sampling method which attempts to improve existing shape estimation via tactile exploration. Gaussian process regression is used for implicit surface modeling with a sequential sampling strategy. The main objective is to make the process of sample point selection more efficient and systematic, such that the unknown shape can be estimated quickly and accurately with very few sample points (e.g., fewer than 1% of the data points describing the true shape). Specifically, we propose to select the next best sample point based on two optimization criteria: 1) the mutual information (MI) for uncertainty reduction, and 2) the local curvature for fidelity enhancement. The combination of these two objectives leads to an optimal sampling process that balances exploration of the whole shape against exploitation of the local areas where higher fidelity (or more sampling) is required. Simulation and experimental results successfully demonstrate the advantage of the proposed method in terms of estimation speed and accuracy over the conventional one, allowing recognizable 3D shapes to be reconstructed from only around 0.4% of the original data set, selected optimally. With the object shape available, this thesis also introduces a knowledge-based approach to quickly generate a task-oriented grasp for a novel object. A comprehensive training dataset, consisting of specific tasks and geometrical and physical knowledge of grasping, is built up from physical experiments. To analyze and efficiently utilize the training data, a multi-step clustering algorithm is developed based on a self-organizing map. A number of representative grasps are then selected from the entire training dataset and used to generate a suitable grasp for a novel object. The number of representative grasps is automatically determined using the proposed auto-growing method. In addition, to improve the accuracy and efficiency of the proposed clustering algorithm, we also develop a novel method to localize the initial centroids while capturing the outliers. Simulation results illustrate that the proposed initialization method and the auto-growing method outperform some conventional approaches in terms of accuracy and efficiency. Furthermore, the proposed knowledge-based grasp planning is also validated on a real robot. The results demonstrate the effectiveness of this approach in generating task-oriented grasps for novel objects.
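    A simplified sketch of the next-best-sample selection described in the first part of this abstract, assuming scikit-learn: the GP's predictive standard deviation stands in for the mutual-information term, and a finite-difference estimate of the implicit function's second derivatives stands in for local curvature; the weights and all numbers are illustrative, not the thesis's actual criterion.

```python
# Hedged sketch: pick the next tactile sample by trading off GP predictive uncertainty
# (exploration) against an estimate of local curvature (fidelity) on an implicit surface.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_sample_point(gp, candidates, w_uncert=1.0, w_curv=0.5, eps=1e-3):
    """Score candidate contact points and return the best one to touch next.

    gp         : GP fitted on (points, signed distances) as an implicit surface model
    candidates : (k, 3) array of candidate contact locations
    """
    _, std = gp.predict(candidates, return_std=True)        # uncertainty term

    # Curvature proxy: second-difference of the predicted implicit function per axis.
    curv = np.zeros(len(candidates))
    f_mid = gp.predict(candidates)
    for axis in range(candidates.shape[1]):
        shift = np.zeros(candidates.shape[1])
        shift[axis] = eps
        f_plus = gp.predict(candidates + shift)
        f_minus = gp.predict(candidates - shift)
        curv += np.abs(f_plus - 2.0 * f_mid + f_minus) / eps**2

    score = w_uncert * std + w_curv * curv
    return candidates[np.argmax(score)]

# Usage sketch: fit the implicit surface on touched points, then query the next touch.
rng = np.random.default_rng(0)
touched = rng.uniform(-1, 1, size=(30, 3))
sdf_values = np.linalg.norm(touched, axis=1) - 0.8           # stand-in signed distances
gp = GaussianProcessRegressor(kernel=RBF(0.5)).fit(touched, sdf_values)
grid = rng.uniform(-1, 1, size=(500, 3))
print(next_sample_point(gp, grid))
```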

    Cross-systems Personalisierung

    The World Wide Web provides access to a wealth of information and services to a huge and heterogeneous user population on a global scale. One important and successful design mechanism for dealing with this diversity of users is to personalize Web sites and services, i.e., to customize system content, characteristics, or appearance with respect to a specific user. Each system independently builds up user profiles and uses this information to personalize the service offering. Such isolated approaches have two major drawbacks: first, investments of users in personalizing a system, either through explicit provision of information or through long and regular use, are not transferable to other systems. Second, users have little or no control over the information that defines their profile, since user data are deeply buried in personalization engines running on the server side. Cross system personalization (CSP) (Mehta, Niederee, & Stewart, 2005) allows for sharing information across different information systems in a user-centric way and can overcome the aforementioned problems. Information about users, which is originally scattered across multiple systems, is combined to obtain maximum leverage and reuse of information. Our initial approaches to cross system personalization relied on each user having a unified profile which different systems can understand. The unified profile contains facets modeling aspects of a multidimensional user and is stored inside a "Context Passport" that the user carries along on his/her journey across information space. The user's Context Passport is presented to a system, which can then understand the context in which the user wants to use the system. The basis of 'understanding' in this approach is of a semantic nature, i.e., the semantics of the facets and dimensions of the unified profile are known, so that the latter can be aligned with the profiles maintained internally at a specific site. The results of the personalization process are then transferred back to the user's Context Passport via a protocol understood by both parties. The main challenge in this approach is to establish some common and globally accepted vocabulary and to create a standard every system will comply with. Machine learning techniques provide an alternative approach to enable CSP without the need for accepted semantic standards or ontologies. The key idea is that one can try to learn dependencies between profiles maintained within one system and profiles maintained within a second system, based on data provided by users who use both systems and who are willing to share their profiles across systems, which we assume is in the interest of the user. Here, instead of requiring a common semantic framework, it is only required that a sufficient number of users cross between systems and that there is enough regularity among users that one can learn within a user population, a fact that is commonly exploited in collaborative filtering. In this thesis, we aim to provide a principled approach towards achieving cross system personalization. We describe both semantic and learning approaches, with a stronger emphasis on the learning approach. We also investigate the privacy and scalability aspects of CSP and provide solutions to these problems. Finally, we also explore in detail the aspect of robustness in recommender systems.
We motivate several approaches for robustifying collaborative filtering and provide the best performing algorithm for detecting malicious attacks reported so far.

The personalization of software systems is of steadily increasing importance, particularly in connection with Web applications such as search engines, community portals, or electronic commerce sites that address large, strongly diversified user groups. Since explicit personalization typically involves considerable time and effort on the part of the user, many applications resort to implicit techniques for automatic personalization, in particular recommender systems, which typically employ methods such as collaborative or social filtering. While these methods do not require the explicit construction of user profiles through answering questions or giving explicit feedback, the quality of implicit personalization depends strongly on the volume of available data, such as transaction, query, or click logs. If, in this sense, little is known about a user, no reliable personal adaptations or recommendations can be made. This dissertation addresses the question of how personalization across system boundaries ("cross system") can be enabled and supported; mainly implicit personalization techniques are considered, but to a limited extent explicit methodologies such as the semantic Context Passport are also discussed. The dissertation thereby treats an important research question of high practical relevance that has been solved only incompletely and unsatisfactorily in the recent scientific literature on this topic. Automatic recommender systems using social filtering techniques have been popularized since roughly the mid-1990s with the rise of the first e-commerce wave, in particular through projects such as Tapestry, GroupLens, and Firefly. In the late 1990s and early 2000s, the main focus of the research literature was on improved statistical methods and advanced inference techniques with which implicit observations can be mapped to concrete adaptation or recommendation actions. In recent years, the question of how personalization systems can be better adapted to the practical requirements of particular applications has come to the fore, with particular attention to the suitable adaptation and extension of existing techniques. It is within this framework that the present thesis is positioned.
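A minimal sketch of the learning idea above (not the thesis's actual model): given users who appear in both systems, learn a regression mapping from system-A profiles to system-B profiles and use it to bootstrap personalization for a user known only to system A. Ridge regression, the profile dimensions, and the synthetic data are assumptions.

```python
# Hedged sketch: learn a cross-system profile mapping from users present in both systems.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
d_a, d_b, n_shared = 40, 25, 300            # profile dimensions and overlapping users (illustrative)
profiles_a = rng.standard_normal((n_shared, d_a))                               # profiles in system A
profiles_b = profiles_a[:, :d_b] + 0.1 * rng.standard_normal((n_shared, d_b))   # stand-in B profiles

mapping = Ridge(alpha=1.0).fit(profiles_a, profiles_b)   # learned dependency between the two systems
new_user_a = rng.standard_normal((1, d_a))                # user who has only used system A
predicted_b_profile = mapping.predict(new_user_a)         # bootstrap personalization in system B
```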

    Rekonstruktion, Analyse und Editierung dynamisch deformierter 3D-OberflÀchen

    Dynamically deforming 3D surfaces play a major role in computer graphics. However, producing time-varying dynamic geometry at ever-increasing detail is a time-consuming and costly process, so a recent trend is to capture geometry data directly from the real world. In the first part of this thesis, I propose novel approaches for this research area. These approaches capture dense dynamic 3D surfaces from multi-camera systems in a particularly robust and accurate way. This provides highly realistic dynamic surface models for phenomena like moving garments and bulging muscles. However, re-using, editing, or otherwise analyzing dynamic 3D surface data is not yet conveniently possible. To close this gap, the second part of this dissertation develops novel data-driven modeling and animation approaches. I first show a supervised data-driven approach for modeling human muscle deformations that scales to huge datasets and provides fine-scale, anatomically realistic deformations at a quality not attainable by previous methods. I then extend data-driven modeling to the unsupervised case, providing editing tools for a wider set of input data ranging from facial performance capture and full-body motion to muscle and cloth deformation. To this end, I introduce the concepts of sparsity and locality within a mathematical optimization framework. I also explore these concepts for constructing shape-aware functions that are useful for static geometry processing, registration, and localized editing.

    Dynamically deformable 3D surfaces play a central role in computer graphics. However, creating the high-resolution, time-varying surface geometry needed for computer graphics applications is extremely labor-intensive. Out of this problem, the trend has developed to capture surface data directly from recordings of the real world. The 3D reconstruction methods required for this are developed in the first part of the thesis. The novel methods presented allow dynamic 3D surfaces to be captured from multi-camera recordings with high reliability and precision. In this way, detailed surface models of phenomena such as garments in motion or tensing muscles can be captured. However, re-using, editing, and analyzing 3D surface data obtained in this way is currently still not possible in a simple manner. To close this gap, the second part of the thesis deals with data-driven modeling and animation. First, an approach for the supervised learning of human muscle deformations is presented. This novel method enables data-driven modeling with particularly large datasets and delivers anatomically realistic deformation effects, surpassing the accuracy of earlier methods. In the next part, the dissertation deals with unsupervised learning from 3D surface data. Novel tools are presented that can process a wide range of input data, from captured facial animation and full-body motion to muscle and cloth deformation. To achieve this breadth of application, the thesis relies on the general concepts of sparsity and locality and embeds them in a mathematical optimization framework. Finally, the thesis shows how these concepts can also be carried over to the construction of surface-adaptive basis functions, opening up applications in the processing, registration, and editing of static surface models.