
    INTREPID Tephra-II: 1307F

    The INTREPID Tephra project, “Enhancing tephrochronology as a global research tool through improved fingerprinting and correlation techniques and uncertainty modelling”, was an overarching project of the international community of tephrochronologists of the International Focus Group on Tephrochronology and Volcanism (INTAV), which in turn lies under the auspices of INQUA’s Stratigraphy and Chronology Commission (SACCOM). INTREPID’s main aim has been to advance understanding of, and efficacy in, fingerprinting, correlating, and dating techniques, to evaluate and quantify uncertainty in tephrochronology, and thus to enhance our ability to provide the best possible linking, dating, and synchronising tool for a wide range of Quaternary research projects around the world. A second aim has been to rebuild the global capability of tephrochronology for future research endeavours through mentoring and encouragement of emerging researchers in the discipline.

    Project 0907: INTREPID – Enhancing tephrochronology as a global research tool through improved fingerprinting and correlation techniques and uncertainty modelling

    In May 2010, the inter-congress meeting of the INQUA International Focus Group on Tephrochronology and Volcanism (INTAV) was held in Kirishima City, southern Kyushu, Japan. INTAV was formed in 2007 at the International Union for Quaternary Research (INQUA) congress held in Cairns. It replaced SCOTAV (Sub-Commission on Tephrochronology and Volcanism), COT (Commission on Tephrochronology), and earlier tephra-related research groups dating back to the 1960s. Previous meetings of the group in the past two decades were held in the Yukon Territory, Canada (2005), France (1998), New Zealand (1994), and USA (1990). The venue for the 2010 meeting was the main hall of the Kokobu Civic Centre in Kirishima City, which was very generously provided free of charge by the Kirishima authorities, partly in return for the delivery of two public lectures, one by David Lowe (“Connecting with our past: using tephras and archaeology to date the Polynesian settlement of Aotearoa/New Zealand”) and the other by Hiroshi Machida (“Widespread tephras originating from Kagoshima occurring in northeast Asia and adjacent seas”), on Sunday 9 May. Participants were treated to a personal welcome by the Mayor of Kirishima City, Shuji Maeda, followed by what appeared to be a very special (and delicious) banquet. However, this spread turned out to be standard lunch and dinner fare provided by the centre’s cafeteria and was enjoyed by participants throughout the meeting.

    An operational approach to graphical uncertainty modelling


    Credal Networks under Epistemic Irrelevance

    A credal network under epistemic irrelevance is a generalised type of Bayesian network that relaxes its two main building blocks. On the one hand, the local probabilities are allowed to be partially specified. On the other hand, the assessments of independence do not have to hold exactly. Conceptually, these two features turn credal networks under epistemic irrelevance into a powerful alternative to Bayesian networks, offering a more flexible approach to graph-based multivariate uncertainty modelling. However, in practice, they have long been perceived as very hard to work with, both theoretically and computationally. The aim of this paper is to demonstrate that this perception is no longer justified. We provide a general introduction to credal networks under epistemic irrelevance, give an overview of the state of the art, and present several new theoretical results. Most importantly, we explain how these results can be combined to allow for the design of recursive inference methods. We provide numerous concrete examples of how this can be achieved, and use these to demonstrate that computing with credal networks under epistemic irrelevance is most definitely feasible, and in some cases even highly efficient. We also discuss several philosophical aspects, including the lack of symmetry, how to deal with probability zero, the interpretation of lower expectations, the axiomatic status of graphoid properties, and the difference between updating and conditioning.
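
    As a minimal illustration of what “partially specified local probabilities” means in practice, the sketch below computes the lower and upper expectation of a function over a credal set described by per-outcome probability intervals, using linear programming with scipy. This is not the paper’s recursive inference method; the outcomes, gamble values, and probability bounds are hypothetical.

    # Minimal sketch: lower/upper expectation over a credal set given by
    # per-outcome probability intervals (hypothetical numbers).
    import numpy as np
    from scipy.optimize import linprog

    f = np.array([1.0, 0.0, -2.0])       # gamble / function of interest
    p_lower = np.array([0.1, 0.2, 0.3])  # lower probability bounds
    p_upper = np.array([0.5, 0.6, 0.6])  # upper probability bounds

    # Probability mass functions must sum to one and respect the bounds.
    A_eq = np.ones((1, len(f)))
    b_eq = np.array([1.0])
    bounds = list(zip(p_lower, p_upper))

    # Lower expectation: minimise E[f]; upper expectation: maximise E[f].
    lower_exp = linprog(f, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
    upper_exp = -linprog(-f, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
    print(f"lower expectation = {lower_exp:.3f}, upper = {upper_exp:.3f}")

    In a full credal network under epistemic irrelevance, such local computations would be combined recursively over the graph; this sketch covers only a single variable.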

    Uncertainty modelling in power spectrum estimation of environmental processes

    For efficient reliability analysis of buildings and structures, robust load models are required in stochastic dynamics, which can be estimated in particular from environmental processes such as earthquakes or wind loads. To determine the response behaviour of a dynamic system under such loads, the power spectral density (PSD) function is a widely used tool for identifying the frequency components and corresponding amplitudes of environmental processes. Since the real data records required for this purpose are often subject to aleatory and epistemic uncertainties, and the PSD estimation process itself can induce further uncertainties, a rigorous quantification of these is essential; otherwise a highly inaccurate load model could be generated, which may yield misleading simulation results. A system behaviour that is actually catastrophic can thus be shifted into an acceptable range, classifying the system as safe even though it is exposed to a high risk of damage or collapse. To address these issues, alternative load models are proposed using probabilistic and non-deterministic models that are able to efficiently account for these uncertainties and to model the loadings accordingly. Various methods are used in the generation of these load models, selected in particular according to the characteristics of the data and the number of available records.

    When multiple data records are available, reliable statistical information can be extracted from a set of similar PSD functions that differ, for instance, only slightly in shape and peak frequency. Based on these statistics, a PSD function model is derived utilising subjective probabilities to capture the epistemic uncertainties and represent this information effectively. The spectral densities are characterised as random variables instead of discrete values, and thus the PSD function itself represents a non-stationary random process comprising a range of possible valid PSD functions for a given data set. If only a limited number of data records is available, such reliable statistical information cannot be derived. Therefore, an interval-based approach is proposed that determines only an upper and a lower bound and does not rely on any distribution within these bounds. A set of discrete-valued PSD functions is transformed into an interval-valued PSD function by optimising the weights of pre-derived basis functions from a Radial Basis Function Network such that they compose upper and lower bounds that encompass the data set. A range of possible values and system responses is thus identified rather than discrete values, which quantifies the epistemic uncertainties. When generating such a load model from real data records, the problem can arise that the individual records exhibit a high spectral variance in the frequency domain and therefore differ too much from each other, even though they appear similar in the time domain. A load model derived from these data may not cover the entire spectral range and is therefore not representative. The data are therefore grouped according to their similarity using the Bhattacharyya distance and the k-means algorithm, which may generate two or more load models from the entire data set. These can be applied separately to the structure under investigation, leading to more accurate simulation results.

    This approach can also be used to estimate the spectral similarity of individual data sets in the frequency domain, which is particularly relevant for the load models mentioned above. If the uncertainties are modelled directly in the time signal, it can be challenging to transform them efficiently into the frequency domain. Such a signal may consist only of reliable bounds within which the actual signal lies. A method is presented that automatically propagates this interval uncertainty through the discrete Fourier transform, obtaining the exact bounds on the Fourier amplitude and an estimate of the PSD function. The method allows such an interval signal to be propagated without making assumptions about the dependence and distribution of the error over the time steps. These novel representations of load models are able to quantify epistemic uncertainties inherent in real data records and induced by the PSD estimation process. The strengths and advantages of these approaches in practice are demonstrated by means of several numerical examples in the field of stochastic dynamics.
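
    As a rough illustration of the multi-record case described above, the sketch below estimates a PSD for each record with Welch’s method and summarises the ensemble per frequency as a mean, a standard deviation, and a simple min/max envelope. This is not the thesis’s implementation; the synthetic records, sampling rate, and Welch settings are assumptions made only for the example.

    # Rough sketch: ensemble of Welch PSD estimates summarised per frequency.
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(0)
    fs = 100.0                        # assumed sampling frequency [Hz]
    n_records, n_samples = 10, 2048

    # Made-up "similar" records: a noisy sinusoid with slightly varying
    # peak frequency, standing in for real environmental data records.
    t = np.arange(n_samples) / fs
    records = [np.sin(2 * np.pi * (5.0 + 0.2 * rng.standard_normal()) * t)
               + 0.5 * rng.standard_normal(n_samples)
               for _ in range(n_records)]

    freqs, _ = welch(records[0], fs=fs, nperseg=256)
    psds = np.array([welch(x, fs=fs, nperseg=256)[1] for x in records])

    psd_mean = psds.mean(axis=0)                          # per-frequency mean
    psd_std = psds.std(axis=0)                            # per-frequency scatter
    psd_low, psd_up = psds.min(axis=0), psds.max(axis=0)  # crude envelope

    k = np.argmax(psd_mean)
    print(f"peak at {freqs[k]:.2f} Hz, relative scatter {psd_std[k] / psd_mean[k]:.2f}")

    The min/max envelope here is only a crude stand-in for the interval-valued PSD that the thesis derives from a Radial Basis Function Network, and the per-record PSDs could additionally be grouped with the Bhattacharyya distance and k-means as described above.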

    Aleatoric Uncertainty Modelling in Regression Problems using Deep Learning

    Nowadays, we live in a world that is, from our perspective, intrinsically uncertain. We do not know what will happen in the future but, to infer it, we build so-called models. These models are abstractions of the world we live in, which allow us to conceive how the world works; they are, essentially, validated against our previous experience and discarded if their predictions prove to be incorrect in the future. This common scientific process of inference has several non-deterministic steps. First of all, our measuring instruments could be inaccurate; that is, the information we use a priori to know what will happen may already contain some irreducible error. In addition, our past experience in building the model could be biased (and, therefore, we would incorrectly infer the future, as the models would be based on unrepresentative data). On the other hand, our model itself may be an oversimplification of reality (which would lead us to unrealistic generalizations). Furthermore, the overall task of inferring the future may be downright non-deterministic. This often happens when the information we have a priori is incomplete or partial for the task to be performed (i.e. it depends on factors we cannot observe at the time of prediction) and we are, consequently, obliged to consider that what we want to predict is not a deterministic value. One way to model all of these uncertainties is through a probabilistic approach that mathematically formalizes these sources of uncertainty in order to create specific methods that capture them. Accordingly, the general aim of this thesis is to define a probabilistic approach that helps artificial intelligence-based systems (specifically, deep learning) become robust and reliable systems capable of being applied to high-risk problems, where good average performance is not enough and critical, high-cost errors must be avoided. In particular, the thesis highlights the current divergence in the literature when it comes to dividing and naming the different types of uncertainty, and proposes a procedure to follow. In addition, based on a real problem arising from the industrial nature of this thesis, it emphasizes the importance of investigating the last type of uncertainty, the so-called aleatoric uncertainty, which arises from the lack of a priori information needed to infer the future deterministically. The thesis delves into different literature models for capturing aleatoric uncertainty using deep learning, analyzes their limitations, and proposes new state-of-the-art approaches that address the limitations exposed during the thesis.

    As a result of applying aleatoric uncertainty modelling to real-world problems, the problem of modelling the uncertainty of a black box system arises. Generically, a black box system is a pre-existing predictive system that does not originally model uncertainty and about whose internals no requirements or assumptions are made. The goal is therefore to build a new system that wraps the black box and models the uncertainty of this original system. In this scenario, not all of the previously introduced aleatoric uncertainty modelling approaches can be considered, which implies that flexible methods such as Quantile Regression need to be modified in order to be applied in this context. Subsequently, the Quantile Regression study brings the need to solve a critical problem in the QR literature, the so-called crossing quantiles phenomenon, which appears when the order between simultaneously predicted quantiles is not preserved, and this motivates the proposal of new additional models to solve it. Finally, all of the above research is summarized in visualization and evaluation methods for the predicted uncertainty, in order to produce uncertainty-tailored methods.
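
    As a minimal sketch of deep quantile regression and one common way of avoiding crossing quantiles, the example below trains a small PyTorch network with the pinball loss, parameterising the outputs as the lowest quantile plus non-negative increments so the predicted quantiles cannot cross. This is not the thesis’s proposed models; the network size, quantile levels, and synthetic data are assumptions.

    # Minimal sketch: multi-quantile regression with the pinball loss and a
    # cumulative-sum output head that prevents quantile crossing.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    QUANTILES = torch.tensor([0.1, 0.5, 0.9])   # assumed quantile levels

    class MonotoneQuantileNet(nn.Module):
        def __init__(self, n_features: int, n_quantiles: int):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
            self.head = nn.Linear(64, n_quantiles)

        def forward(self, x):
            raw = self.head(self.body(x))
            # First output is the lowest quantile; softplus makes the other
            # outputs non-negative increments, so quantiles cannot cross.
            inc = F.softplus(raw[:, 1:])
            return torch.cumsum(torch.cat([raw[:, :1], inc], dim=1), dim=1)

    def pinball_loss(pred, target, q=QUANTILES):
        # pred: (batch, n_quantiles), target: (batch, 1)
        diff = target - pred
        return torch.mean(torch.maximum(q * diff, (q - 1.0) * diff))

    # Hypothetical usage on synthetic heteroscedastic data.
    x = torch.randn(256, 3)
    y = x[:, :1] + (0.5 + x[:, 1:2].abs()) * torch.randn(256, 1)
    model = MonotoneQuantileNet(3, len(QUANTILES))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = pinball_loss(model(x), y)
        loss.backward()
        opt.step()

    The cumulative-sum parameterisation enforces monotonicity of the quantile outputs by construction, which is one of several ways the crossing-quantile problem is handled in the literature.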

    Aerosol model selection and uncertainty modelling by adaptive MCMC technique


    PV Charging and Storage for Electric Vehicles

    Electric vehicles are only ‘green’ as long as the source of electricity is ‘green’ as well. At the same time, renewable power production suffers from diurnal and seasonal variations, creating the need for energy storage technology. Moreover, overloading and voltage problems are expected in the distribution network due to the high penetration of distributed generation and the increased power demand from the charging of electric vehicles. The energy and mobility transition hence calls for novel technological innovations in the field of sustainable electric mobility powered from renewable energy. This Special Issue focuses on recent advances in technology for PV charging and storage for electric vehicles.