
    Approximate Bayesian Computation for a Class of Time Series Models

    In the following article we consider approximate Bayesian computation (ABC) for certain classes of time series models. In particular, we focus upon scenarios where the likelihoods of the observations and parameters are intractable, by which we mean that one cannot evaluate the likelihood even up to a positive unbiased estimate. This paper reviews and develops a class of approximation procedures based upon the idea of ABC that specifically maintain the probabilistic structure of the original statistical model. This idea is useful in that it can facilitate an analysis of the bias of the approximation and the adaptation of established computational methods for parameter inference. Several existing results in the literature are surveyed and novel developments with regard to computation are given.
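    The paper's own procedures preserve the probabilistic structure of the model; as a baseline for contrast, a minimal sketch of plain rejection ABC (the idea the paper builds on) might look as follows. The AR(1) simulator, summary statistic, prior, and tolerance are all illustrative assumptions, not the paper's setup.

    # A minimal sketch of plain rejection ABC for a time series model.
    # Illustrative only: the paper develops refinements that preserve the
    # model's probabilistic structure; all choices below are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_ar1(phi, n=200):
        """Simulate an AR(1) series; stands in for any intractable simulator."""
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + rng.standard_normal()
        return x

    def summary(x):
        """Summary statistic: lag-1 autocorrelation."""
        return np.corrcoef(x[:-1], x[1:])[0, 1]

    def rejection_abc(observed, n_draws=2000, eps=0.05):
        s_obs = summary(observed)
        accepted = []
        for _ in range(n_draws):
            phi = rng.uniform(-0.99, 0.99)       # draw from the prior
            s_sim = summary(simulate_ar1(phi))   # simulate pseudo-data
            if abs(s_sim - s_obs) < eps:         # keep draws whose summaries match
                accepted.append(phi)
        return np.array(accepted)                # approximate posterior sample

    posterior = rejection_abc(simulate_ar1(0.7))
    print(posterior.mean(), posterior.std())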

    Mechanical and Modular Verification Condition Generation for Object-Based Software

    The foundational goal of this work is the development of mechanizable proof rules and a verification condition generator based on those rules for modern software. The verification system will be modular, so that it is possible to verify the implementation of a component relying only upon the specifications of the underlying components that are reused. The system must enable full behavioral verification. The proof rules used to generate verification conditions (VCs) of correctness must be amenable to automation. While automation requires software developers to annotate implementations with assertions, it should not require their assistance in the proofs. This research has led to a VC generator that realizes these goals. The VC generator has been applied to a range of benchmarks to show the viability of verified components, and it has been used in classrooms at multiple institutions to teach reasoning principles. A fundamental problem in computing is the inability to show that a software system behaves as required. Modern software systems are composed of numerous software components. The fundamental goal of this work is to verify each independently, in a modular fashion, resulting in full behavioral verification and providing an assurance that components meet their specifications and can be used with confidence to build verified software systems. Of course, to be practical, such a system must be mechanical. Although the principles of verification have existed for decades, the basis for a practical verification system for modern software components has remained elusive.
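    To make the annotate-then-verify idea concrete, here is a hedged sketch, not the verification system described above (which targets formal specifications rather than runtime checks): the developer writes down a contract and a loop invariant, and a VC generator would turn each annotation into a proof obligation to be discharged automatically. Python asserts merely stand in for the specification language.

    # Hypothetical illustration of annotations from which VCs would be generated.
    def integer_sqrt(n: int) -> int:
        """Largest r with r*r <= n.
        requires: n >= 0
        ensures:  r*r <= n < (r+1)*(r+1)
        """
        assert n >= 0                      # precondition
        r = 0
        while (r + 1) * (r + 1) <= n:
            # invariant: r*r <= n
            # (two VCs: the invariant holds on entry and is preserved by the body)
            r += 1
        # postcondition (a VC: invariant plus loop-exit condition imply it)
        assert r * r <= n < (r + 1) * (r + 1)
        return r

    print(integer_sqrt(10))   # 3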

    A Process for Extracting Knowledge in Design for the Developing World

    The aim of this study was to develop a process for identifying design knowledge shared across product classes and contexts in Design for the Developing World. A process for extracting design knowledge in the field was developed based on the Knowledge Discovery in Databases framework and applied to a sample dataset of 48 products and small-scale technologies. Unsupervised cluster analysis revealed two distinct product groups, cluster X-AA and cluster Z-AC-AD. Unique attributes of cluster X-AA include local manufacture, local maintenance and service, human power, distribution by a non-governmental organization, income generation, and application in the water/sanitation or agriculture sectors; the label Locally Oriented Design for the Developing World was assigned to this group based on its dominant features. Unique attributes of cluster Z-AC-AD include electric power, distribution by a private organization, and application in the health or energy/communication sectors; this group was labeled Globally Oriented Design for the Developing World. These findings were corroborated by additional analyses suggesting that certain design knowledge is shared across classes and contexts within groups of products. The results suggest that at least two such groups exist, which can serve as an initial framework for organizing the literature on inter-context and inter-class design knowledge. Design knowledge was extracted from each group by collecting known approaches, principles, and methods from the available literature. This knowledge may be applied as design guidance in future work by identifying the product group corresponding to the design scenario and sourcing the related set of knowledge.
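    As an illustration of the unsupervised clustering step, the following sketch groups binary product-attribute vectors with hierarchical clustering. The attribute names and the tiny data matrix are hypothetical, not the study's 48-product dataset.

    # Hypothetical clustering of products encoded as binary attribute vectors.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    attributes = ["local_manufacture", "human_power", "ngo_distribution",
                  "electric_power", "private_distribution", "health_sector"]
    # Rows: products; a 1 means the product has that attribute.
    X = np.array([
        [1, 1, 1, 0, 0, 0],   # e.g. a human-powered agricultural tool
        [1, 1, 1, 0, 0, 0],
        [0, 0, 0, 1, 1, 1],   # e.g. an electrically powered health device
        [0, 0, 0, 1, 1, 1],
    ], dtype=bool)

    # Jaccard distance suits binary attributes; average linkage builds the tree.
    Z = linkage(pdist(X, metric="jaccard"), method="average")
    labels = fcluster(Z, t=2, criterion="maxclust")   # cut into two groups
    print(labels)   # two clusters, analogous to the locally/globally oriented groups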

    3D reconstruction for plastic surgery simulation based on statistical shape models

    This thesis was carried out at Crisalix in collaboration with Universitat Pompeu Fabra within the Doctorats Industrials program. Crisalix's mission is to enhance communication between plastic surgery professionals and patients by answering the most common question in the surgery planning process: "How will I look after the surgery?". The solution proposed by Crisalix is based on 3D imaging technology: it generates a 3D reconstruction that accurately represents the area of the patient to be operated on, and then allows multiple simulations of the plastic procedure to be created, representing the possible outcomes of the surgery. This thesis presents a framework capable of reconstructing 3D shapes of the faces and breasts of plastic surgery patients from 2D images and 3D scans. The 3D reconstruction of an object is a challenging problem with many inherent ambiguities, and statistical-model-based methods are a powerful approach to overcoming some of them. We follow the intuition of maximizing the use of available prior information by introducing it into statistical-model-based methods to enhance their properties. First, we explore Active Shape Models (ASM), a well-known method for 2D shape alignment. However, it is challenging to keep prior information (e.g. a small set of given landmarks) unchanged once the statistical model constraints are applied. We propose a new weighted regularized projection into the parameter space which yields shapes that simultaneously fulfill the imposed shape constraints and remain plausible according to the statistical model. Second, we extend this methodology to 3D Morphable Models (3DMM), a widespread method for 3D reconstruction. Existing methods, however, present some limitations: some rely on computationally expensive non-linear optimizations that can get stuck in local minima, and not all provide enough resolution to accurately represent the anatomical detail this application requires. Given the medical use of the application, the accuracy and robustness of the method are important factors to take into consideration. We show how 3DMM initialization and 3DMM fitting can be improved using our weighted regularized projection. Finally, we present a framework capable of reconstructing 3D shapes of plastic surgery patients from two possible inputs: 2D images and 3D scans. Our method is used in different stages of the 3D reconstruction pipeline: shape alignment, 3DMM initialization, and 3DMM fitting. The developed methods have been integrated into the production environment of Crisalix, proving their validity.
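    One plausible reading of the weighted regularized projection is as a weighted ridge problem: coordinates carrying prior information (e.g. the given landmarks) receive large weights, and a regularizer keeps the recovered parameters plausible under the model, i.e. min_b ||W(x - x_mean - P b)||^2 + lam ||b||^2. The closed-form solution below is a standard derivation under that reading; the thesis's exact formulation may differ.

    # Sketch of a weighted regularized projection into a shape model's
    # parameter space; a standard weighted ridge solution, offered only as
    # an illustration of the abstract's description.
    import numpy as np

    def weighted_regularized_projection(x, x_mean, P, w, lam=1.0):
        """
        x      : observed shape, flattened (2N,)
        x_mean : model mean shape (2N,)
        P      : shape basis, columns are modes (2N, k)
        w      : per-coordinate weights (2N,), large on given landmarks
        lam    : regularization strength toward the mean shape
        Solves  min_b ||W (x - x_mean - P b)||^2 + lam ||b||^2.
        """
        W2 = np.diag(w ** 2)
        A = P.T @ W2 @ P + lam * np.eye(P.shape[1])
        b = np.linalg.solve(A, P.T @ W2 @ (x - x_mean))
        return x_mean + P @ b      # plausible shape honoring the constraints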

    The Quixote project: Collaborative and Open Quantum Chemistry data management in the Internet age

    Computational Quantum Chemistry has developed into a powerful, efficient, reliable and increasingly routine tool for exploring the structure and properties of small to medium sized molecules. Many thousands of calculations are performed every day, some offering results which approach experimental accuracy. However, in contrast to other disciplines, such as crystallography or bioinformatics, where standard formats and well-known, unified databases exist, this QC data is generally destined to remain locally held in files which are not designed to be machine-readable. Only a very small subset of these results will become accessible to the wider community through publication.
    In this paper we describe how the Quixote Project is developing the infrastructure required to convert output from a number of different molecular quantum chemistry packages to a common, semantically rich, machine-readable format and to build repositories of QC results. Such an infrastructure offers benefits at many levels. The standardised representation of the results will facilitate software interoperability, for example making it easier for analysis tools to take data from different QC packages, and will also help with the archival and deposition of results. The repository infrastructure, which is lightweight and built using open software components, can be implemented at the individual researcher, project, organisation or community level, offering the exciting possibility that in future many of these QC results can be made publicly available, to be searched and interpreted just as crystallography and bioinformatics results are today.
    Although we believe that quantum chemists will appreciate the contribution the Quixote infrastructure can make to the organisation and exchange of their results, we anticipate that greater rewards will come from enabling their results to be consumed by a wider community. As the repositories grow they will become a valuable source of chemical data for use by other disciplines in both research and education.
    The Quixote project is unconventional in that the infrastructure is being implemented in advance of a full definition of the data model which will eventually underpin it. We believe that a working system which offers real value to researchers, based on tools and shared, searchable repositories, will encourage early participation from a broader community, including both producers and consumers of data. In the early stages, searching and indexing can be performed on the chemical subject of the calculations and on well-defined calculation metadata. The process of defining more specific quantum chemical definitions, adding them to dictionaries and extracting them consistently from the results of the various software packages can then proceed in an incremental manner, adding additional value at each stage.
    Not only will these results help to change the data management model in the field of Quantum Chemistry, but the methodology can be applied to other pressing problems related to data in computational and experimental science.
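    As a toy illustration of the convert-then-index idea, a converter might pull package-specific values out of a log and emit a semantically tagged record. The field names, the log format, the regular expressions, and the JSON carrier are all hypothetical; the Quixote project's actual common format is far richer.

    # Hypothetical converter: package-specific log -> machine-readable record.
    import json
    import re

    def parse_qc_log(text: str) -> dict:
        """Extract a few metadata fields from a (made-up) QC output file."""
        record = {"software": None, "method": None, "energy_hartree": None}
        if m := re.search(r"^Program:\s+(\S+)", text, re.M):
            record["software"] = m.group(1)
        if m := re.search(r"^Method:\s+(\S+)", text, re.M):
            record["method"] = m.group(1)
        if m := re.search(r"Total energy\s*=\s*(-?\d+\.\d+)", text):
            record["energy_hartree"] = float(m.group(1))
        return record

    log = "Program: ExampleQC\nMethod: B3LYP\nTotal energy = -76.4089\n"
    print(json.dumps(parse_qc_log(log), indent=2))   # searchable, depositable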

    Developing an Object-Oriented Approach for Operations Simulation in SPEEDES

    Using simulation techniques, the performance of any proposed system can be tested under different scenarios with a generated model. However, it is difficult to rapidly create simulation models that accurately represent the complexity of the system. In recent years, Object-Oriented Discrete-Event Simulation has emerged as a promising technology for implementing rapid simulation schemes. A number of software packages based on programming languages like C++ and Java are available for carrying out Object-Oriented Discrete-Event Simulation. These packages establish a general framework for simulation in computer programs, but need to be further customized for the desired end-use applications. In this thesis, a generic simulation library is created for the distributed Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES). This library offers classes to model the functionality of servers, processes, resources, transporters, and decisions, and is expected to produce efficient simulation models in less time and with less coding. The class hierarchy is modeled using the Unified Modeling Language (UML). To test the library, the existing SPEEDES Space Shuttle Model is enhanced and recreated; this enhanced model is successfully validated against the original Arena model.
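    SPEEDES itself is a C++ framework; the following Python sketch only illustrates the kind of reusable abstraction such a library provides, namely a base event-driven simulation engine plus a server class that queues entities and schedules completion events. The class names and behavior are illustrative, not the thesis library's API.

    # Minimal discrete-event simulation with a reusable Server abstraction.
    import heapq

    class Simulation:
        def __init__(self):
            self.now, self._events = 0.0, []
        def schedule(self, delay, action):
            heapq.heappush(self._events, (self.now + delay, id(action), action))
        def run(self):
            while self._events:
                self.now, _, action = heapq.heappop(self._events)
                action()

    class Server:
        """Serves one entity at a time; others wait in a FIFO queue."""
        def __init__(self, sim, service_time):
            self.sim, self.service_time = sim, service_time
            self.queue, self.busy = [], False
        def arrive(self, name):
            self.queue.append(name)
            if not self.busy:
                self._start()
        def _start(self):
            self.busy = True
            name = self.queue.pop(0)
            self.sim.schedule(self.service_time, lambda: self._finish(name))
        def _finish(self, name):
            print(f"t={self.sim.now:.1f}: {name} done")
            self.busy = False
            if self.queue:
                self._start()

    sim = Simulation()
    srv = Server(sim, service_time=2.0)
    srv.arrive("job-1"); srv.arrive("job-2")
    sim.run()   # job-1 finishes at t=2.0, job-2 at t=4.0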

    RGtk2: A Graphical User Interface Toolkit for R

    Graphical user interfaces (GUIs) are growing in popularity as a complement or alternative to the traditional command line interfaces to R. RGtk2 is an R package for creating GUIs in R. The package provides programmatic access to GTK+ 2.0, an open-source GUI toolkit written in C. To construct a GUI, the R programmer calls RGtk2 functions that map to functions in the underlying GTK+ library. This paper introduces the basic concepts underlying GTK+ and explains how to use RGtk2 to construct GUIs from R. The tutorial is based on simple and practical programming examples. We also provide more complex examples illustrating the advanced features of the package. The design of the RGtk2 API and the low-level interface from R to GTK+ are discussed at length. We compare RGtk2 to alternative GUI toolkits for R.
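    RGtk2's essential pattern is a thin mapping from high-level calls to functions in an underlying C library. As a language-neutral analogy only (this is not RGtk2's mechanism, and the wrapped function is arbitrary), the same binding pattern looks like this with ctypes:

    # Analogy for a high-level-to-C binding: wrap a C library function so
    # callers never see the foreign-function layer. Platform lookup of the
    # C math library may vary; this is a sketch, not production code.
    import ctypes, ctypes.util

    libm = ctypes.CDLL(ctypes.util.find_library("m"))  # load the C math library
    libm.cos.argtypes = [ctypes.c_double]              # declare the C signature
    libm.cos.restype = ctypes.c_double

    def cos(x: float) -> float:
        """High-level wrapper mapping directly onto the C function."""
        return libm.cos(x)

    print(cos(0.0))   # 1.0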

    Structural analysis of microsatellites

    Satellite design, development, fabrication, testing and entry into service is a complex process, each stage of which involves intricate steps to achieve the desired objective. This thesis summarizes a study in the development and testing of microsatellites to support qualification and, eventually, to prepare a spacecraft for spaceflight. Students in the Space Systems Engineering laboratory (SSE Lab) in the Aerospace Engineering Program are developing a pair of microsatellites for a technology demonstration in space. After the initial design of the spacecraft is completed, a significant amount of time is spent on gaining confidence in the design. Various mathematical models are developed to represent the system and verify its functionality. In the case of the microsatellite's primary structure, a finite element model (FEM) is used to predict the behavior of the satellite structure and to verify the strength requirements of the design before fabrication. The finite element model, its application, and the results obtained form the majority of this thesis, after which attention is given to the testing phase of the microsatellite. After gaining confidence in the design and fabrication of the components, it is important to validate the structure by subjecting it to structural testing, which is the only means of gaining confidence in the design and certifying it for spaceflight. The results obtained from testing show how closely the mathematical model (FEM) represents the physical system, provide an important learning experience for the satellite team, and help to better understand and improve the design of the next generation of satellites on campus.
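    To indicate what a finite element model does at its core, here is a deliberately tiny sketch: 1-D axial bar elements, an assembled global stiffness matrix, a fixed root node, and a tip load. Real spacecraft FEMs are three-dimensional and orders of magnitude larger; every number and the bar geometry are illustrative assumptions.

    # Minimal FEM: axial bar, 4 elements, fixed at node 0, 1 kN tip load.
    import numpy as np

    E, A, L, n_elem = 70e9, 1e-4, 1.0, 4        # aluminium-like bar properties
    k = E * A / (L / n_elem)                    # element axial stiffness
    n_nodes = n_elem + 1

    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elem):                     # assemble global stiffness
        K[e:e+2, e:e+2] += k * np.array([[1, -1], [-1, 1]])

    F = np.zeros(n_nodes)
    F[-1] = 1000.0                              # tip load [N]

    u = np.zeros(n_nodes)                       # fixed boundary at node 0:
    u[1:] = np.linalg.solve(K[1:, 1:], F[1:])   # solve the reduced system
    print(u)                                    # nodal displacements [m]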

    Automatic Reporting and Comparison of Genome-Wide Association Study Results

    In recent years, genome-wide association studies (GWAS) have grown both in size and scope, with sample sizes reaching hundreds of thousands and the focus of the efforts shifting to the amassing of phenome-wide, population-level data resources. These studies have brought with them an unprecedented number of associations between genomic regions and phenotypic traits. Recently, the FinnGen project was started to create a population-level, phenome-wide GWAS resource for the Finnish population. The large amount of result data created by the FinnGen project creates a need for an automatic process for extracting significant results. This thesis describes the automatic reporting tool created for the needs of the FinnGen project. The tool extracts and annotates significant results from GWAS summary statistics and compares them to previously identified associations; its motivation and function are described. A WDL-based data analysis pipeline was created for the tool and tested using a set of GWAS summary statistics. The results come in the form of identified signals per phenotype, together with information about the novelty of each signal. The results of the experiment show that the tool scales to the sizes necessary for the FinnGen project.
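    The core filtering-and-comparison step such a tool performs can be sketched as follows. The significance threshold is the conventional genome-wide level; the grouping window, column names, and catalogue representation are illustrative assumptions rather than the tool's actual interface.

    # Sketch: keep genome-wide-significant variants, group nearby hits into
    # signals, and flag signals absent from a known-association catalogue.
    GWAS_P = 5e-8            # conventional genome-wide significance threshold
    WINDOW = 250_000         # illustrative grouping/comparison window [bp]

    def extract_signals(rows):
        """rows: iterable of dicts with 'chrom', 'pos', 'pval' keys."""
        hits = sorted((r for r in rows if r["pval"] < GWAS_P),
                      key=lambda r: (r["chrom"], r["pos"]))
        signals = []
        for hit in hits:
            if (signals and signals[-1]["chrom"] == hit["chrom"]
                    and hit["pos"] - signals[-1]["pos"] <= WINDOW):
                continue                 # same signal as the previous lead hit
            signals.append(hit)
        return signals

    def annotate_novelty(signals, known):
        """known: set of (chrom, pos) pairs of previously reported associations."""
        for s in signals:
            s["novel"] = not any(s["chrom"] == c and abs(s["pos"] - p) <= WINDOW
                                 for c, p in known)
        return signals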