188 research outputs found

    Pate de verre process

    The intent of this thesis is to present a concise summary of the basic processes involved in the pate de verre method of casting glass. The pate de verre process allows one to cast practically any conceivable form into a glass object. Prior to beginning, however, one should give some consideration to the entire process, its possibilities, and its limitations. Beginning with making glass frit, the techniques covered include experimentation in combining glass frit with ceramic frits to achieve a variety of surfaces not normally available with one glass formula. The ingredients of the plaster mold used to contain the glass frit as it is fired will change based on the different properties of each constituent and the effects each has in combination with the other materials and on the glass. Individual personal style and experimentation are the best determinants of which plaster formula and method of casting the mold will produce a superior glass casting. The final stage in the pate de verre process is firing the frit-packed mold to a temperature high enough to melt the glass. The kiln is held for a period of time at this high temperature to allow the frit to fuse completely, flow into, and fill the mold. The pate de verre process is flexible, allowing one to sculpt glass into forms not normally achieved with other glass-working processes. The glass casting, once fired, takes on characteristics that evoke other media (marble, quartz, stone) quite unlike our normal associations with glass. Yet the piece retains the quality of glass I find most fascinating and alluring: the ability to capture the light that passes into the glass body and reflect it back out. It is these characteristics, the enduring appearance of the material and the hint of life and glow from within, that enable my sculptural work to begin to speak about life and its struggles, our fragility and our strength.

    Partially Exchangeable Networks and Architectures for Learning Summary Statistics in Approximate Bayesian Computation

    We present a novel family of deep neural architectures, named partially exchangeable networks (PENs), that leverage probabilistic symmetries. By design, PENs are invariant to block-switch transformations, which characterize the partial exchangeability properties of conditionally Markovian processes. Moreover, we show that any block-switch invariant function has a PEN-like representation. The DeepSets architecture is a special case of PEN, so we can also target fully exchangeable data. We employ PENs to learn summary statistics in approximate Bayesian computation (ABC). When comparing PENs to previous deep learning methods for learning summary statistics, our results are highly competitive for both time series and static models. Indeed, PENs provide more reliable posterior samples even when using less training data.
    Comment: Forthcoming in the Proceedings of ICML 2019. New comparisons with several different networks. We now use the Wasserstein distance to produce comparisons. Code available on GitHub. 16 pages, 5 figures, 21 tables.
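
    As a rough illustration of the fully exchangeable special case mentioned above, the sketch below (in PyTorch) builds a DeepSets-style summary network: an element-wise network followed by mean pooling, which makes the output invariant to permutations of the input set. The layer sizes, the use of mean pooling, and the name DeepSetsSummary are illustrative assumptions rather than the authors' architecture; a PEN for Markovian data would additionally process overlapping blocks of consecutive observations before pooling.

        # Minimal sketch of a permutation-invariant (DeepSets-style) summary network.
        # Hypothetical layer sizes; not the architecture from the paper.
        import torch
        import torch.nn as nn

        class DeepSetsSummary(nn.Module):
            def __init__(self, in_dim=1, hidden=64, out_dim=4):
                super().__init__()
                # phi is applied to every set element independently
                self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, hidden), nn.ReLU())
                # rho maps the pooled representation to the summary statistics
                self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                         nn.Linear(hidden, out_dim))

            def forward(self, x):                 # x: (batch, set_size, in_dim)
                pooled = self.phi(x).mean(dim=1)  # mean pooling gives permutation invariance
                return self.rho(pooled)           # (batch, out_dim) summary statistics

        summaries = DeepSetsSummary()(torch.randn(8, 100, 1))
        print(summaries.shape)  # torch.Size([8, 4])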

    Corporate governance and its impact on financial performance of public family-owned firms: a study of market capitalization

    Corporate governance strongly contributes to the efficient functioning of markets and corporations, not only by providing the right governance architecture, but also by aligning the goals and interests of shareholders and management. Family businesses have been present throughout economic history. This work project focuses on the relationship between family ownership, firm value, and performance, measured separately using Tobin's Q and market capitalization. The sample is limited to the European market, more precisely the Portuguese and Danish markets, for which we observe a negative effect of family ownership on firm value in all models.
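
    For readers unfamiliar with the quantities named above, the hypothetical sketch below shows the general shape of such an analysis in Python: a common approximation of Tobin's Q and an OLS regression of firm value on a family-ownership indicator plus one control. The toy data, variable names, and statsmodels specification are illustrative assumptions, not the study's actual sample or model.

        # Hypothetical illustration of regressing Tobin's Q on a family-ownership dummy.
        import pandas as pd
        import statsmodels.formula.api as smf

        firms = pd.DataFrame({
            "market_cap":   [120.0, 80.0, 310.0, 45.0, 95.0],   # market value of equity
            "total_debt":   [60.0, 30.0, 150.0, 20.0, 40.0],
            "total_assets": [200.0, 100.0, 400.0, 90.0, 150.0], # book value of assets
            "family_owned": [1, 1, 0, 1, 0],                    # 1 = family-owned firm
            "firm_size":    [5.3, 4.6, 6.0, 4.5, 5.0],          # log assets, a control
        })

        # A simple Tobin's Q approximation: (market value of equity + debt) / book assets.
        firms["tobins_q"] = (firms["market_cap"] + firms["total_debt"]) / firms["total_assets"]

        model = smf.ols("tobins_q ~ family_owned + firm_size", data=firms).fit()
        print(model.params)  # a negative family_owned coefficient would match the stated finding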

    MIWAE: Deep Generative Modelling and Imputation of Incomplete Data

    We consider the problem of handling missing data with deep latent variable models (DLVMs). First, we present a simple technique to train DLVMs when the training set contains missing-at-random data. Our approach, called MIWAE, is based on the importance-weighted autoencoder (IWAE) and maximises a potentially tight lower bound on the log-likelihood of the observed data. Compared to the original IWAE, our algorithm does not induce any additional computational overhead due to the missing data. We also develop Monte Carlo techniques for single and multiple imputation using a DLVM trained on an incomplete data set. We illustrate our approach by training a convolutional DLVM on a static binarisation of MNIST with 50% of the pixels missing. Leveraging multiple imputation, a convolutional network trained on these incomplete digits has test performance similar to one trained on complete data. On various continuous and binary data sets, we also show that MIWAE provides accurate single imputations and is highly competitive with state-of-the-art methods.
    Comment: A short version of this paper was presented at the 3rd NeurIPS workshop on Bayesian Deep Learning.
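
    As a minimal numerical sketch of the bound described above, assuming toy Gaussian stand-ins for the encoder and decoder networks: the importance-weighted estimate of the log-likelihood of the observed entries only, log( (1/K) Σ_k p(x_obs | z_k) p(z_k) / q(z_k | x_obs) ). The dimensions, the fixed linear "decoder", and the variational parameters are made-up assumptions.

        # Toy importance-weighted bound on log p(x_obs), evaluated on observed entries only.
        import numpy as np
        from scipy.stats import norm
        from scipy.special import logsumexp

        rng = np.random.default_rng(0)
        d, K = 4, 50                          # data dimension, number of importance samples
        x = rng.normal(size=d)
        mask = np.array([1, 1, 0, 1], bool)   # True where the entry is observed

        # Toy "encoder" q(z | x_obs): a Gaussian whose parameters are fixed here (assumption).
        mu_q, sigma_q = x[mask].mean() * np.ones(2), 0.5 * np.ones(2)
        z = mu_q + sigma_q * rng.normal(size=(K, 2))          # z_k ~ q(z | x_obs)

        def decode(z):
            # Toy "decoder" mean: a fixed linear map instead of a neural network (assumption).
            W = np.ones((2, d)) * 0.3
            return z @ W

        log_p_x_given_z = norm.logpdf(x[mask], loc=decode(z)[:, mask], scale=1.0).sum(axis=1)
        log_p_z = norm.logpdf(z, 0.0, 1.0).sum(axis=1)
        log_q_z = norm.logpdf(z, mu_q, sigma_q).sum(axis=1)

        bound = logsumexp(log_p_x_given_z + log_p_z - log_q_z) - np.log(K)
        print(bound)  # importance-weighted lower-bound estimate of log p(x_obs)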

    Leveraging the Exact Likelihood of Deep Latent Variable Models

    Deep latent variable models (DLVMs) combine the approximation abilities of deep neural networks with the statistical foundations of generative models. Variational methods are commonly used for inference; however, the exact likelihood of these models has been largely overlooked. The purpose of this work is to study the general properties of this quantity and to show how they can be leveraged in practice. We focus on important inferential problems that rely on the likelihood: estimation and missing data imputation. First, we investigate maximum likelihood estimation for DLVMs: in particular, we show that most unconstrained models used for continuous data have an unbounded likelihood function. This problematic behaviour is demonstrated to be a source of mode collapse. We also show how to ensure the existence of maximum likelihood estimates, and draw useful connections with nonparametric mixture models. Finally, we describe an algorithm for missing data imputation using the exact conditional likelihood of a deep latent variable model. On several data sets, our algorithm consistently and significantly outperforms the usual imputation scheme used for DLVMs.
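
    To make the unbounded-likelihood observation above concrete, here is a tiny generic Gaussian illustration (not the paper's DLVM construction): if a continuous-output model can shrink the scale of one component onto a single training point, the log-likelihood grows without bound.

        # Generic illustration: collapsing a Gaussian scale onto one data point blows up the likelihood.
        import numpy as np
        from scipy.stats import norm

        x_train = np.array([0.2, 1.5, -0.7])
        for sigma in [1.0, 0.1, 0.01, 1e-4]:
            # One component collapses onto x_train[0]; the remaining points keep a unit scale.
            loglik = (norm.logpdf(x_train[0], loc=x_train[0], scale=sigma)
                      + norm.logpdf(x_train[1:], loc=0.0, scale=1.0).sum())
            print(f"sigma={sigma:g}  log-likelihood={loglik:.2f}")  # increases as sigma -> 0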

    The Multivariate Generalised von Mises distribution: Inference and applications

    Circular variables arise in a multitude of data-modelling contexts, ranging from robotics to the social sciences, but they have been largely overlooked by the machine learning community. This paper partially redresses this imbalance by extending some standard probabilistic modelling tools to the circular domain. First, we introduce a new multivariate distribution over circular variables, called the multivariate Generalised von Mises (mGvM) distribution. This distribution can be constructed by restricting and renormalising a general multivariate Gaussian distribution to the unit hyper-torus. Previously proposed multivariate circular distributions are shown to be special cases of this construction. Second, we introduce a new probabilistic model for circular regression that is inspired by Gaussian processes, and a method for probabilistic principal component analysis with circular hidden variables. These models can leverage standard modelling tools (e.g. covariance functions and methods for automatic relevance determination). Third, we show that the posterior distribution in these models is an mGvM distribution, which enables the development of an efficient variational free-energy scheme for performing approximate inference and approximate maximum-likelihood learning.
    AKWN thanks CAPES grant BEX 9407-11-1. JF thanks the Danish Council for Independent Research grant 0602-02909B. RET thanks EPSRC grants EP/L000776/1 and EP/M026957/1.
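
    A minimal sketch of the construction described above, under simplifying assumptions: each angle is embedded as (cos θ, sin θ), a multivariate Gaussian is placed over the embeddings, and the density is evaluated only on the unit hyper-torus, which leaves the normalising constant intractable. The mean and covariance below are arbitrary illustrative values.

        # Unnormalised density of a Gaussian restricted to the unit hyper-torus (two angles).
        import numpy as np

        mu = np.array([1.0, 0.0, 0.0, 1.0])      # mean in (cos t1, sin t1, cos t2, sin t2) space
        Sigma = np.diag([0.5, 0.5, 0.3, 0.3])
        Sigma[0, 2] = Sigma[2, 0] = 0.1          # weak coupling between the two angles
        precision = np.linalg.inv(Sigma)

        def unnormalised_log_density(theta):
            """theta: array of 2 angles; log of the restricted (unnormalised) Gaussian density."""
            e = np.array([np.cos(theta[0]), np.sin(theta[0]),
                          np.cos(theta[1]), np.sin(theta[1])])
            diff = e - mu
            return -0.5 * diff @ precision @ diff

        print(unnormalised_log_density(np.array([0.1, 1.4])))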

    Exploring Predictive Uncertainty and Calibration in NLP: A Study on the Impact of Method & Data Scarcity

    We investigate the problem of determining the predictive confidence (or, conversely, uncertainty) of a neural classifier through the lens of low-resource languages. By training models on sub-sampled datasets in three different languages, we assess the quality of estimates from a wide array of approaches and their dependence on the amount of available data. We find that, while approaches based on pre-trained models and ensembles achieve the best results overall, the quality of uncertainty estimates can surprisingly suffer with more data. We also perform a qualitative analysis of uncertainties on sequences, discovering that a model's total uncertainty seems to be influenced to a large degree by its data uncertainty rather than by model uncertainty. All model implementations are open-sourced in a software package.
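
    The data- versus model-uncertainty distinction above is often quantified with the standard entropy decomposition for ensembles, sketched below (this is the generic decomposition, not necessarily the exact measures used in the paper): total predictive entropy equals the expected per-member entropy (data uncertainty) plus the mutual information between prediction and model (model uncertainty). The ensemble probabilities are made up.

        # Entropy-based decomposition of predictive uncertainty for a toy ensemble.
        import numpy as np

        def entropy(p, axis=-1):
            return -np.sum(p * np.log(p + 1e-12), axis=axis)

        # Class probabilities from a 4-member ensemble on one input (assumed values).
        ensemble_probs = np.array([[0.70, 0.20, 0.10],
                                   [0.65, 0.25, 0.10],
                                   [0.60, 0.30, 0.10],
                                   [0.72, 0.18, 0.10]])

        total = entropy(ensemble_probs.mean(axis=0))  # entropy of the mean prediction
        data = entropy(ensemble_probs).mean()         # mean member entropy = data uncertainty
        model = total - data                          # mutual information = model uncertainty
        print(f"total={total:.3f}  data={data:.3f}  model={model:.3f}")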

    Internal-Coordinate Density Modelling of Protein Structure: Covariance Matters

    After the recent ground-breaking advances in protein structure prediction, one of the remaining challenges in protein machine learning is to reliably predict distributions of structural states. Parametric models of fluctuations are difficult to fit due to complex covariance structures between degrees of freedom in the protein chain, often causing models to violate either local or global structural constraints. In this paper, we present a new strategy for modelling protein densities in internal coordinates, which uses constraints in 3D space to induce covariance structure between the internal degrees of freedom. We illustrate the potential of the procedure by constructing a variational autoencoder with full covariance output induced by the constraints implied by the conditional mean in 3D, and demonstrate that our approach makes it possible to scale density models of internal coordinates to full protein backbones in two settings: 1) a unimodal setting for proteins exhibiting small fluctuations and limited amounts of available data, and 2) a multimodal setting for larger conformational changes in a high-data regime.
    Comment: Pages: 10 main, 3 references, 8 appendix. Figures: 5 main, 6 appendix.
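
    As background for the full-covariance output mentioned above, the sketch below shows one generic way to parameterise a full-covariance Gaussian output in PyTorch, via a predicted Cholesky factor. It does not reproduce the paper's constraint-induced covariance; the dimensions, layer sizes, and the name FullCovarianceHead are illustrative assumptions.

        # Generic full-covariance Gaussian output head parameterised by a Cholesky factor.
        import torch
        import torch.nn as nn
        import torch.distributions as D

        class FullCovarianceHead(nn.Module):
            def __init__(self, latent_dim=8, out_dim=6):
                super().__init__()
                self.out_dim = out_dim
                self.mean = nn.Linear(latent_dim, out_dim)
                # Predict the entries of a lower-triangular Cholesky factor L.
                self.chol = nn.Linear(latent_dim, out_dim * (out_dim + 1) // 2)

            def forward(self, z):
                L = torch.zeros(z.shape[0], self.out_dim, self.out_dim)
                idx = torch.tril_indices(self.out_dim, self.out_dim)
                L[:, idx[0], idx[1]] = self.chol(z)
                diag = torch.arange(self.out_dim)
                L[:, diag, diag] = nn.functional.softplus(L[:, diag, diag]) + 1e-4  # positive diagonal
                return D.MultivariateNormal(self.mean(z), scale_tril=L)

        dist = FullCovarianceHead()(torch.randn(3, 8))
        print(dist.sample().shape, dist.log_prob(torch.zeros(3, 6)).shape)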