
    Final solution to the problem of relating a true copula to an imprecise copula

    In this paper we solve in the negative the problem proposed in this journal (I. Montes et al., Sklar's theorem in an imprecise setting, Fuzzy Sets and Systems, 278 (2015), 48-66) of whether an order interval defined by an imprecise copula contains a copula. Namely, if $\mathcal{C}$ is a nonempty set of copulas, then $\underline{C} = \inf\{C\}_{C\in\mathcal{C}}$ and $\overline{C} = \sup\{C\}_{C\in\mathcal{C}}$ are quasi-copulas and the pair $(\underline{C},\overline{C})$ is an imprecise copula according to the definition introduced in the cited paper, following the ideas of $p$-boxes. We show that there is an imprecise copula $(A,B)$ in this sense such that there is no copula $C$ whatsoever satisfying $A \leqslant C \leqslant B$. So, it is questionable whether the proposed definition of the imprecise copula is in accordance with the intentions of the initiators. Our methods may be of independent interest: we upgrade the ideas of Dibala et al. (Defects and transformations of quasi-copulas, Kybernetika, 52 (2016), 848-865), where possibly negative volumes of quasi-copulas as defects from being copulas were studied. Comment: 20 pages; added Conclusion, added some clarifications in proofs, added some explanations at the beginning of each section, corrected typos, results remain the same.
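
    For orientation, recall that a function $C:[0,1]^2\to[0,1]$ with uniform margins is a copula exactly when it assigns nonnegative volume to every rectangle (the 2-increasing property); this standard condition is reproduced here only as background and is not taken from the paper:

        $$V_C\bigl([u_1,u_2]\times[v_1,v_2]\bigr) = C(u_2,v_2) - C(u_1,v_2) - C(u_2,v_1) + C(u_1,v_1) \;\ge\; 0.$$

    The question settled negatively above is whether some such copula $C$ always satisfies $\underline{C}(u,v) \le C(u,v) \le \overline{C}(u,v)$ for all $(u,v) \in [0,1]^2$.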

    Quasi-random numbers for copula models

    The present work addresses the question of how sampling algorithms for commonly applied copula models can be adapted to account for quasi-random numbers. Besides sampling methods such as the conditional distribution method (based on a one-to-one transformation), it is also shown that typically faster sampling methods (based on stochastic representations) can be used to improve upon classical Monte Carlo methods when pseudo-random number generators are replaced by quasi-random number generators. This opens the door to quasi-random numbers for models well beyond independent margins or the multivariate normal distribution. Detailed examples (in the context of finance and insurance), illustrations and simulations are given, and software has been developed and provided in the R packages copula and qrng.
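
    As a minimal sketch of the idea, the snippet below draws quasi-random samples from a bivariate Gaussian copula in Python (not using the R packages named above); the correlation value, the sample size, and the choice of a Gaussian copula are assumptions made purely for illustration:

        import numpy as np
        from scipy.stats import norm, qmc

        # Assumed 2-d Gaussian copula with correlation 0.7, chosen only for the example.
        R = np.array([[1.0, 0.7],
                      [0.7, 1.0]])
        L = np.linalg.cholesky(R)

        sobol = qmc.Sobol(d=2, scramble=True, seed=42)
        u = sobol.random_base2(m=10)             # 2^10 quasi-random points in [0, 1)^2
        u = np.clip(u, 1e-12, 1 - 1e-12)         # guard the inverse-normal transform
        z = norm.ppf(u) @ L.T                    # stochastic representation: correlated normals
        v = norm.cdf(z)                          # copula sample with uniform margins

        print(np.corrcoef(v, rowvar=False))      # dependence induced by the copula

    Replacing the Sobol generator by pseudo-random uniforms recovers the classical Monte Carlo scheme, which is the comparison the paper is concerned with.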

    A full scale Sklar's theorem in the imprecise setting

    In this paper we present a surprisingly general extension of the main result of a paper that appeared in this journal: I. Montes et al., Sklar's theorem in an imprecise setting, Fuzzy Sets and Systems, 278 (2015), 48-66. The main tools we develop in order to do so are: (1) a theory of quasi-distributions based on an idea presented in a paper by R. Nelsen and collaborators; (2) starting from what is called a (bivariate) $p$-box in the above-mentioned paper, we propose some new techniques based on what we call a restricted (bivariate) $p$-box; and (3) a substantial extension of the theory of coherent imprecise copulas developed by M. Omladič and N. Stopar in a previous paper, in order to handle coherence of restricted (bivariate) $p$-boxes. A side result of ours, of possibly even greater importance, is the following: every bivariate distribution, whether obtained on a usual $\sigma$-additive probability space or on an additive space, can be obtained as a copula of its margins, meaning that its possible extraordinariness depends solely on its margins. This might indicate that copulas are a stronger probability concept than probability itself. Comment: 16 pages, minor changes.
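
    For context, the classical (precise) form of Sklar's theorem that is being extended states that every bivariate distribution function factors through a copula of its margins; this standard statement is included here only as background:

        $$H(x,y) = C\bigl(F(x),\,G(y)\bigr) \quad \text{for all } (x,y) \in \mathbb{R}^2,$$

    where $F$ and $G$ are the margins of $H$ and $C$ is uniquely determined on $\operatorname{Ran} F \times \operatorname{Ran} G$.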

    A Computational Framework for Efficient Reliability Analysis of Complex Networks

    With the growing scale and complexity of modern infrastructure networks comes the challenge of developing efficient and dependable methods for analysing their reliability. Special attention must be given to potential network interdependencies as disregarding these can lead to catastrophic failures. Furthermore, it is of paramount importance to properly treat all uncertainties. The survival signature is a recent development built to effectively analyse complex networks that far exceeds standard techniques in several important areas. Its most distinguishing feature is the complete separation of system structure from probabilistic information. Because of this, it is possible to take into account a variety of component failure phenomena such as dependencies, common causes of failure, and imprecise probabilities without reevaluating the network structure.

    This cumulative dissertation presents several key improvements to the survival signature ecosystem focused on the structural evaluation of the system as well as the modelling of component failures. A new method is presented in which (inter)-dependencies between components and networks are modelled using vine copulas. Furthermore, aleatory and epistemic uncertainties are included by applying probability boxes and imprecise copulas. By leveraging the large number of available copula families it is possible to account for varying dependent effects. The graph-based design of vine copulas synergizes well with the typical descriptions of network topologies. The proposed method is tested on a challenging scenario using the IEEE reliability test system, demonstrating its usefulness and emphasizing the ability to represent complicated scenarios with a range of dependent failure modes.

    The numerical effort required to analytically compute the survival signature is prohibitive for large complex systems. This work presents two methods for the approximation of the survival signature. In the first approach system configurations of low interest are excluded using percolation theory, while the remaining parts of the signature are estimated by Monte Carlo simulation. The method is able to accurately approximate the survival signature with very small errors while drastically reducing computational demand. Several simple test systems, as well as two real-world situations, are used to show the accuracy and performance. However, with increasing network size and complexity this technique also reaches its limits. A second method is presented where the numerical demand is further reduced. Here, instead of approximating the whole survival signature only a few strategically selected values are computed using Monte Carlo simulation and used to build a surrogate model based on normalized radial basis functions. The uncertainty resulting from the approximation of the data points is then propagated through an interval predictor model which estimates bounds for the remaining survival signature values. This imprecise model provides bounds on the survival signature and therefore the network reliability. Because a few data points are sufficient to build the interval predictor model it allows for even larger systems to be analysed.

    With the rising complexity of not just the system but also the individual components themselves comes the need for the components to be modelled as subsystems in a system-of-systems approach. A study is presented where a previously developed framework for resilience decision-making is adapted to multidimensional scenarios in which the subsystems are represented as survival signatures. The survival signature of the subsystems can be computed ahead of the resilience analysis due to the inherent separation of structural information. This enables efficient analysis in which the failure rates of subsystems for various resilience-enhancing endowments are calculated directly from the survival function without reevaluating the system structure.

    In addition to the advancements in the field of survival signature, this work also presents a new framework for uncertainty quantification developed as a package in the Julia programming language called UncertaintyQuantification.jl. Julia is a modern high-level dynamic programming language that is ideal for applications such as data analysis and scientific computing. UncertaintyQuantification.jl was built from the ground up to be generalised and versatile while remaining simple to use. The framework is in constant development and its goal is to become a toolbox encompassing state-of-the-art algorithms from all fields of uncertainty quantification and to serve as a valuable tool for both research and industry. UncertaintyQuantification.jl currently includes simulation-based reliability analysis utilising a wide range of sampling schemes, local and global sensitivity analysis, and surrogate modelling methodologies.
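
    As a minimal sketch of the Monte Carlo estimation of survival-signature entries described above (the five-component bridge network, its edge list, and the sample size are illustrative assumptions, not systems from the dissertation), one entry Phi(l), the probability that the network works given that exactly l components function, can be estimated as follows:

        import random
        import networkx as nx

        # Hypothetical five-component "bridge" network, assumed purely for illustration:
        # each component is an edge connecting the terminal nodes s and t via a and b.
        EDGES = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]

        def survival_signature_entry(l, n_samples=20_000, seed=0):
            """Estimate Phi(l) = P(s and t are connected | exactly l components work)."""
            rng = random.Random(seed)
            works = 0
            for _ in range(n_samples):
                surviving = rng.sample(EDGES, l)       # which l components still function
                g = nx.Graph(surviving)
                if g.has_node("s") and g.has_node("t") and nx.has_path(g, "s", "t"):
                    works += 1
            return works / n_samples

        print([round(survival_signature_entry(l), 3) for l in range(len(EDGES) + 1)])

    Because the structure function is evaluated separately from any component lifetime model, an estimated signature can be reused with different (possibly dependent or imprecise) failure models, which is the separation the dissertation exploits.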

    Fitting aggregation operators to data

    Theoretical advances in modelling the aggregation of information have produced a wide range of aggregation operators, applicable to almost every practical problem. The most important classes of aggregation operators include triangular norms, uninorms, generalised means and OWA operators. With such a variety, an important practical problem has emerged: how to fit the parameters/weights of these families of aggregation operators to observed data? How to estimate quantitatively whether a given class of operators is suitable as a model in a given practical setting? Aggregation operators are rather special classes of functions, and thus they require specialised regression techniques which enforce important theoretical properties, like commutativity or associativity. My presentation will address this issue in detail and will discuss various regression methods applicable specifically to t-norms, uninorms and generalised means. I will also demonstrate software implementing these regression techniques, which allows practitioners to paste their data and obtain optimal parameters of the chosen family of operators.
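
    A small sketch of what such a constrained fit can look like for one family, OWA operators, using least squares under the defining weight constraints (the data, the Python/SciPy implementation, and the solver choice are illustrative assumptions, not the software demonstrated in the presentation):

        import numpy as np
        from scipy.optimize import minimize

        def fit_owa_weights(X, y):
            """Fit OWA weights w (w >= 0, sum(w) = 1) so that w . sort_desc(x) ~ y
            in the least-squares sense, preserving the defining OWA constraints."""
            Xs = -np.sort(-X, axis=1)                  # OWA aggregates the sorted arguments
            n = X.shape[1]
            objective = lambda w: np.sum((Xs @ w - y) ** 2)
            constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
            result = minimize(objective, np.full(n, 1.0 / n),
                              bounds=[(0.0, 1.0)] * n,
                              constraints=constraints, method="SLSQP")
            return result.x

        # Made-up observations of a three-argument aggregation, for illustration only.
        X = np.array([[0.2, 0.9, 0.5], [0.1, 0.4, 0.8], [0.7, 0.3, 0.6]])
        y = np.array([0.75, 0.65, 0.62])
        print(fit_owa_weights(X, y))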

    Essays on Capital Calculation in Insurance

    In order to be able to bear the risk they are taking, insurance companies have to set aside a certain amount of cushion that can guarantee the payment of liabilities, up to a defined probability, and thus remain solvent in case of bad events. This amount is called capital. The calculation of capital is a complex problem. To be sustainable, capital must consider all possible risk sources that may lead to losses among the assets and liabilities of the insurance company, and it must account for the likelihood and the effect of the bad (and usually extreme) events that could affect the risk sources. Insurance companies build models and tools in order to perform this capital calculation. For that, they have to collect data, build statistical evidence, and build mathematical models and tools in order to efficiently and accurately derive capital. The papers presented in this thesis deal with three major difficulties. First, the uncertainty behind the choice of a specific model and the quantification of this uncertainty in terms of additional capital. The use of external scenarios, i.e. opinions on the likelihood of some events happening, allows one to build a coherent methodology that makes the cushion more robust against wrong model specification. Second, the computational complexity of using these models in an industrialized environment, and the numerical methods available for increasing their computational efficiency. Most of these models cannot provide an analytical formula for capital. Consequently, one has to approximate it via simulation methods. Considering the high number of risk sources and the complexity of insurance contracts, these methods can be slow to reach a reasonable accuracy. This often makes these models unusable in practical cases. Enhancements of classical simulation methods are presented with the aim of making these approximations faster to run for the same level of accuracy. Third, the lack of reliable data and the high complexity of problems with long time horizons, and statistical methods for identifying and building reliable proxies in such cases. A typical example is life-insurance contracts, which imply exposure to multiple risk sources over a long horizon. Such contracts can in fact be approximated wisely by proxies that capture the risk over time.
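
    As a toy illustration of the simulation-based capital calculation referred to above (the lognormal loss model, the 99.5% confidence level, and the sample size are assumptions made purely for this example, not the thesis's models):

        import numpy as np

        rng = np.random.default_rng(0)

        # Assumed toy loss model: lognormal aggregate loss over a one-year horizon.
        losses = rng.lognormal(mean=10.0, sigma=0.8, size=1_000_000)

        # Capital as the 99.5% quantile of the loss distribution minus the expected loss
        # (a Value-at-Risk style cushion; the confidence level is illustrative).
        capital = np.quantile(losses, 0.995) - losses.mean()
        print(f"simulated capital requirement: {capital:,.0f}")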

    Graph Analytics For Smart Manufacturing

    The emergence of high-resolution sensing and imaging technologies has allowed us to track the variability in manufacturing processes at every conceivable resolution of interest. However, representing the underlying manufacturing processes using streaming sensor data remains a challenge. Efficient representations are critical for enabling real-time monitoring and quality assurance in smart manufacturing. Towards this, we present graph-based methods for efficient representation of the image data gathered from advanced manufacturing processes. In this dissertation, we first focus on experimental studies involving the finishing of complex additively manufactured components and discuss the important phenomenological details of the polishing process. Our experimental studies point to a material redistribution theory of polishing, where material flows in the form of thin fluid-like layers, eventually bridging the neighboring asperities. Subsequently, we use the physics of the process gathered from this study to develop a random planar graph approach to represent the evolution of the surface morphology, as observed in electron microscope images during mechanical polishing. In the second half of the dissertation, we focus on unsupervised image segmentation using graph cuts, iteratively estimating the image labels by solving a max-flow problem while choosing the tuning parameters by maximum a posteriori estimation. We also establish the consistency of the posterior estimates. Applications of the method in benchmark and manufacturing case studies show more than 90% improvement in segmentation performance compared to state-of-the-art unsupervised methods. While characterization of advanced manufacturing processes using image and sensor data is increasingly sought after, it is equally important to perform this characterization rapidly. The last chapter of this dissertation focuses on the rapid characterization of the salient microstructural phases present on a metallic workpiece surface via a nanoindentation-based lithography process. A summary of the contributions and directions for future work is also presented.
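
    A minimal sketch of the basic graph-cut step underlying such segmentation, reduced to a single binary s-t minimum cut on a tiny image (the toy image, class means, smoothness weight, and use of networkx are illustrative assumptions; the dissertation's method additionally iterates the labelling and estimates the tuning parameters by MAP):

        import numpy as np
        import networkx as nx

        def binary_graph_cut(img, mu0, mu1, lam=1.0):
            """Two-label segmentation of a small image by a single s-t minimum cut:
            squared-distance data costs to the class means mu0/mu1, and a Potts
            smoothness penalty lam between 4-neighbors.  Pixels that end up on the
            source side of the cut receive label 0."""
            h, w = img.shape
            g, s, t = nx.DiGraph(), "s", "t"
            for i in range(h):
                for j in range(w):
                    p = (i, j)
                    g.add_edge(s, p, capacity=(img[i, j] - mu1) ** 2)  # paid if p gets label 1
                    g.add_edge(p, t, capacity=(img[i, j] - mu0) ** 2)  # paid if p gets label 0
                    for q in ((i + 1, j), (i, j + 1)):                 # 4-neighbor smoothness
                        if q[0] < h and q[1] < w:
                            g.add_edge(p, q, capacity=lam)
                            g.add_edge(q, p, capacity=lam)
            _, (source_side, _) = nx.minimum_cut(g, s, t)
            labels = np.ones((h, w), dtype=int)
            for p in source_side - {s}:
                labels[p] = 0
            return labels

        toy = np.array([[0.1, 0.2, 0.8],
                        [0.1, 0.9, 0.9],
                        [0.2, 0.8, 1.0]])
        print(binary_graph_cut(toy, mu0=0.15, mu1=0.9, lam=0.05))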

    Untangling hotel industry’s inefficiency: An SFA approach applied to a renowned Portuguese hotel chain

    The present paper explores the technical efficiency of four hotels from the Teixeira Duarte Group, a renowned Portuguese hotel chain. An efficiency ranking of these four hotel units located in Portugal is established using Stochastic Frontier Analysis. This methodology makes it possible to discriminate between measurement error and systematic inefficiency in the estimation process, enabling the main causes of inefficiency to be investigated. Several suggestions for efficiency improvement are made for each hotel studied.
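
    A compact sketch of the kind of frontier model involved, the normal/half-normal composed-error specification commonly used in Stochastic Frontier Analysis (the implementation below is a generic illustration in Python and is not the estimation procedure used in the paper):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def fit_sfa_half_normal(y, X):
            """Maximum-likelihood fit of the composed-error production frontier
            y = X @ b + v - u, with noise v ~ N(0, s_v^2) and one-sided inefficiency
            u ~ |N(0, s_u^2)| (the classic normal/half-normal specification)."""
            n, k = X.shape

            def negloglik(theta):
                b, sigma, lmbda = theta[:k], np.exp(theta[k]), np.exp(theta[k + 1])
                eps = y - X @ b                               # composed residual v - u
                ll = (np.log(2.0 / sigma)
                      + norm.logpdf(eps / sigma)
                      + norm.logcdf(-eps * lmbda / sigma))
                return -np.sum(ll)

            beta0 = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting values
            theta0 = np.concatenate([beta0, [0.0, 0.0]])      # log(sigma) = log(lambda) = 0
            return minimize(negloglik, theta0, method="BFGS").x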

    Natural vibrations of cracked elastic stepped plates (Pragudega elastsete astmeliste plaatide omavõnkumised)

    The electronic version of this thesis does not include the publications. To assess structural durability, non-destructive methods are used; these allow defects in structures to be detected without destroying them in the process. There are many non-destructive methods; one of them is the eigenfrequency method, which is based on the property of structures to oscillate freely at certain frequencies. These frequencies are called eigenfrequencies. The theoretical investigation of the eigenfrequencies of structures is the main topic of this dissertation. The dissertation examines the eigenfrequencies of two types of stepped structures, namely plates and plate strips. The main question is how cracks affect the eigenfrequencies of these structures. We consider cracks that are stable, have constant length, and are located at the re-entrant corners of the steps of the structures. This work can be divided into three parts. In the first part the impact of cracks on the eigenfrequencies of isotropic plate strips under different boundary conditions is examined. The second part is devoted to the role of cracks in anisotropic plates. The third part deals with cracked anisotropic plates resting on an elastic foundation. In the dissertation a new analytical method, which combines the classical theory of plates and the equations of fracture mechanics, is proposed. The proposed method is the first analytical method that allows us to model the eigenfrequencies of cracked and stepped structures at the same time.
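
    As background for the analytical approach described above, classical plate theory leads to the following standard free-vibration problem, from which the eigenfrequencies are obtained; the equation is included only for orientation and is not quoted from the thesis:

        $$D\,\nabla^4 W(x,y) - \rho h\,\omega^2\,W(x,y) = 0, \qquad D = \frac{E h^3}{12(1-\nu^2)},$$

    where $W$ is the mode shape, $\omega$ an eigenfrequency, $h$ the plate thickness, $\rho$ the material density, $E$ Young's modulus and $\nu$ Poisson's ratio.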