
    TMB: Automatic Differentiation and Laplace Approximation

    TMB is an open source R package that enables quick implementation of complex nonlinear random effect (latent variable) models in a manner similar to the established AD Model Builder package (ADMB, admb-project.org). In addition, it offers easy access to parallel computations. The user defines the joint likelihood for the data and the random effects as a C++ template function, while all other operations, such as reading in the data, are done in R. The package evaluates and maximizes the Laplace approximation of the marginal likelihood where the random effects are automatically integrated out. This approximation, and its derivatives, are obtained using automatic differentiation (up to order three) of the joint likelihood. The computations are designed to be fast for problems with many random effects (~10^6) and parameters (~10^3). Computation times using ADMB and TMB are compared on a suite of examples ranging from simple models to large spatial models where the random effects are a Gaussian random field. Speedups ranging from 1.5 to about 100 are obtained, with increasing gains for larger problems. The package and examples are available at http://tmb-project.org.
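    As a rough illustration of the workflow described above (not one of the paper's benchmark examples), a TMB user template for a simple one-way random-effects model might look like the sketch below; the data and parameter names (y, group, mu, logsd_u, logsd_y, u) are hypothetical.

        // Minimal sketch of a TMB user template: the joint negative log-likelihood
        // of data y and random effects u for a one-way random-effects model.
        #include <TMB.hpp>

        template<class Type>
        Type objective_function<Type>::operator() ()
        {
          DATA_VECTOR(y);          // observations
          DATA_IVECTOR(group);     // 0-based group index for each observation
          PARAMETER(mu);           // overall mean
          PARAMETER(logsd_u);      // log SD of the random effects
          PARAMETER(logsd_y);      // log SD of the observation noise
          PARAMETER_VECTOR(u);     // random effects, integrated out via the Laplace approximation

          Type nll = 0;
          // Random-effects contribution: u[j] ~ N(0, exp(logsd_u))
          for (int j = 0; j < u.size(); j++)
            nll -= dnorm(u(j), Type(0), exp(logsd_u), true);
          // Data contribution: y[i] ~ N(mu + u[group[i]], exp(logsd_y))
          for (int i = 0; i < y.size(); i++)
            nll -= dnorm(y(i), mu + u(group(i)), exp(logsd_y), true);
          return nll;              // joint negative log-likelihood
        }

    On the R side, such a template is typically compiled with TMB::compile(), loaded via dyn.load(dynlib()), wrapped with MakeADFun(data, parameters, random = "u"), and optimized with nlminb(); declaring u as random is what triggers the automatic Laplace approximation.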

    Software Tool for Validation of Chromatographic Analytical Method

    Many industries rely on analytical procedures to analyze various substances. In the medical field they are used to perform laboratory analyses. In the pharmaceutical industry they are used to identify and quantify the active component of a drug product as well as its impurities. In the food industry they are used to determine the properties of foods and their ingredients. An analytical procedure can be viewed as the algorithm of a chemical analysis. Because of their widespread use, analytical procedures must be validated. Validation proves that the chemical analysis described by an analytical procedure is fit for its intended purpose, i.e., that it can measure the compound of interest with sufficient accuracy. Today, however, validation is performed manually by analytical chemists, which is tedious and potentially error-prone. Systems that assist analytical chemists during validation are therefore needed, both to ensure the consistency of the results and to reduce the chemists' workload. The Institute of Chemistry of the University of Tartu has recognized the need for such systems and started the development of one of them, ValChrom. This thesis evaluates the strengths and shortcomings of existing software solutions and then describes the implementation of ValChrom, a web-based application for the validation of analytical procedures.

    Toward Fault-Tolerant Applications on Reconfigurable Systems-on-Chip

    The abstract is provided in the attachment.

    Early component-based reliability assessment using UML based software models

    In the last decade, software has grown in complexity and size, while development timelines have diminished. As a result, component-based software engineering is becoming routine. Component-based software reliability assessment combines the architecture of the system with the reliability of the components to obtain the system reliability. This allows developers to produce a reliable system and testers to focus on the vulnerable areas. This thesis discusses a tool developed to implement the methodology previously created for early reliability assessment of component-based systems. The tool, Early Component-based Reliability Assessment (ECRA), uses Rational Rose Unified Modeling Language (UML) diagrams to predict the reliability of component-based software. ECRA provides the user with an easy interface to annotate the UML diagrams and uses a Bayesian algorithm to predict the system reliability. This thesis presents the methodology behind ECRA, the steps taken to develop it, and its applications.
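    As a generic illustration of how component reliabilities and architecture combine into a system-level estimate (this is not ECRA's Bayesian algorithm, and all names and numbers below are hypothetical), a simple scenario-based model multiplies the reliabilities of the components exercised by each usage scenario and weights the result by the scenario's probability:

        // Illustrative scenario-based reliability estimate: system reliability is the
        // probability-weighted product of the reliabilities of the components visited
        // in each usage scenario, assuming independent component failures.
        #include <cstdio>
        #include <vector>

        struct Scenario {
            double probability;            // how often this usage scenario occurs
            std::vector<int> components;   // indices of the components it exercises
        };

        double systemReliability(const std::vector<double>& componentReliability,
                                 const std::vector<Scenario>& scenarios) {
            double r = 0.0;
            for (const Scenario& s : scenarios) {
                double pathReliability = 1.0;
                for (int c : s.components)
                    pathReliability *= componentReliability[c];
                r += s.probability * pathReliability;
            }
            return r;
        }

        int main() {
            std::vector<double> rel = {0.99, 0.95, 0.999};               // per-component reliabilities
            std::vector<Scenario> use = {{0.7, {0, 1}}, {0.3, {0, 2}}};  // usage profile
            std::printf("estimated system reliability: %.4f\n", systemReliability(rel, use));
        }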

    Translating Video Recordings of Mobile App Usages into Replayable Scenarios

    Screen recordings of mobile applications are easy to obtain and capture a wealth of information pertinent to software developers (e.g., bugs or feature requests), making them a popular mechanism for crowdsourced app feedback. Thus, these videos are becoming a common artifact that developers must manage. In light of unique mobile development constraints, including swift release cycles and rapidly evolving platforms, automated techniques for analyzing all types of rich software artifacts provide benefit to mobile developers. Unfortunately, automatically analyzing screen recordings presents serious challenges, due to their graphical nature, compared to other types of (textual) artifacts. To address these challenges, this paper introduces V2S, a lightweight, automated approach for translating video recordings of Android app usages into replayable scenarios. V2S is based primarily on computer vision techniques and adapts recent solutions for object detection and image classification to detect and classify user actions captured in a video, and convert these into a replayable test scenario. We performed an extensive evaluation of V2S involving 175 videos depicting 3,534 GUI-based actions collected from users exercising features and reproducing bugs from over 80 popular Android apps. Our results illustrate that V2S can accurately replay scenarios from screen recordings, and is capable of reproducing approximately 89% of our collected videos with minimal overhead. A case study with three industrial partners illustrates the potential usefulness of V2S from the viewpoint of developers.
    Comment: In proceedings of the 42nd International Conference on Software Engineering (ICSE'20), 13 pages.
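    The abstract does not spell out the replay mechanism, but as a rough sketch of the final step (turning classified GUI actions into something replayable on a device), one could emit adb input commands from the detected actions; the Action structure and labels below are hypothetical, not V2S's internals.

        // Convert classified GUI actions (e.g., detected from video frames) into
        // adb input commands; piping the output to a shell replays the scenario
        // on a connected Android device or emulator.
        #include <cstdio>
        #include <vector>

        enum class ActionType { Tap, LongTap, Swipe };

        struct Action {
            ActionType type;
            int x1, y1;        // start coordinates (pixels)
            int x2, y2;        // end coordinates (used for swipes)
            int durationMs;    // gesture duration
        };

        void emitReplayScript(const std::vector<Action>& actions) {
            for (const Action& a : actions) {
                switch (a.type) {
                case ActionType::Tap:
                    std::printf("adb shell input tap %d %d\n", a.x1, a.y1);
                    break;
                case ActionType::LongTap:  // a long press can be emulated as a zero-length swipe
                    std::printf("adb shell input swipe %d %d %d %d %d\n",
                                a.x1, a.y1, a.x1, a.y1, a.durationMs);
                    break;
                case ActionType::Swipe:
                    std::printf("adb shell input swipe %d %d %d %d %d\n",
                                a.x1, a.y1, a.x2, a.y2, a.durationMs);
                    break;
                }
            }
        }

        int main() {
            emitReplayScript({{ActionType::Tap, 540, 960, 0, 0, 0},
                              {ActionType::Swipe, 300, 1500, 300, 500, 400}});
        }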

    Model compilation: An approach to automated model derivation

    An approach to automated model derivation for knowledge-based systems is introduced. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge-based system. An implemented example illustrates how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
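    The core mechanism described above is an ordered pipeline of transformations applied to a base model; a minimal, hypothetical sketch of that idea follows (the Model type and the example transformations are placeholders, not the paper's actual representation).

        // Generic sketch of model compilation as a transformation pipeline: a base
        // model is passed through an ordered sequence of transformations, each
        // returning a more specialized model.
        #include <cstdio>
        #include <functional>
        #include <string>
        #include <vector>

        struct Model {
            std::string description;   // placeholder for structure/behavior content
        };

        using Transformation = std::function<Model(const Model&)>;

        // Apply each transformation in order; the output of one is the input of the next.
        Model compileModel(Model base, const std::vector<Transformation>& pipeline) {
            for (const Transformation& t : pipeline)
                base = t(base);
            return base;
        }

        int main() {
            Model base{"general structure/behavior model of the device"};
            // A troubleshooting compiler and a redesign compiler would simply be
            // two different pipelines applied to the same base model.
            std::vector<Transformation> troubleshooting = {
                [](const Model& m) { return Model{m.description + " -> behavioral abstraction"}; },
                [](const Model& m) { return Model{m.description + " -> diagnosis-specific view"}; },
            };
            Model specialized = compileModel(base, troubleshooting);
            std::printf("%s\n", specialized.description.c_str());
        }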

    Script Language for Image Processing

    This paper proposes the design and structure of a script language intended for easy description and prototyping of high-level image processing operations. The image operations are meant to be composed from basic building blocks represented by either C/C++ functions or appropriate block connections in FPGA (Field-Programmable Gate Array) circuits. The proposed language is designed for use in systems for rapid prototyping and testing of image processing applications as well as for final implementations of the applications. The integration of the language into such systems is discussed, along with the parts of the image processing system as seen through the interface of the proposed scripting language. The paper covers the structure and syntax of the language, the parallelization of high-level image operations, and the communication between multiple instances of the script interpreters.
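    As a hedged sketch of the building-block idea (the abstract does not give the actual script syntax or interfaces, so the Image type, registry, and operation names below are hypothetical), high-level operations could be chained from registered C/C++ primitives roughly like this:

        // Hypothetical building-block registry: a script interpreter could map
        // script-level operation names to C++ primitives and chain them into
        // high-level image operations.
        #include <functional>
        #include <map>
        #include <string>
        #include <vector>

        struct Image {
            int width;
            int height;
            std::vector<unsigned char> pixels;   // grayscale data
        };

        using ImageOp = std::function<Image(const Image&)>;

        std::map<std::string, ImageOp> g_ops;    // name -> primitive implementation

        void registerOp(const std::string& name, ImageOp op) { g_ops[name] = op; }

        // Execute a high-level operation expressed as a pipeline of primitive names,
        // much as a script line such as "invert | threshold" might be interpreted.
        Image runPipeline(Image img, const std::vector<std::string>& names) {
            for (const std::string& n : names)
                img = g_ops.at(n)(img);
            return img;
        }

        int main() {
            registerOp("invert", [](const Image& in) {
                Image out = in;
                for (unsigned char& p : out.pixels)
                    p = static_cast<unsigned char>(255 - p);
                return out;
            });
            Image img{2, 2, {0, 64, 128, 255}};
            Image result = runPipeline(img, {"invert"});
            return result.pixels[0] == 255 ? 0 : 1;   // trivial check of the pipeline
        }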