
    Fifty years of the Psychology of Programming

    This paper reflects on the evolution (past, present and future) of the 'psychology of programming' over the 50-year period of this anniversary issue. The International Journal of Human-Computer Studies (IJHCS) has been a key venue for much seminal work in this field, including its first foundations, and we review the changing research concerns seen in publications over these five decades. We relate this thematic evolution to research taking place over the same period within more specialist communities, especially the Psychology of Programming Interest Group (PPIG), the Empirical Studies of Programming series (ESP), and the ongoing community in Visual Languages and Human-Centric Computing (VL/HCC). Many other communities have interacted with the psychology of programming, both influenced by research published within the specialist groups and in turn influencing research priorities. We end with an overview of the core theories that have been developed over this period, as an introductory resource for new researchers, and with the authors' own analysis of key priorities for future research.

    BARR-C:2018 and MISRA C:2012: Synergy Between the Two Most Widely Used C Coding Standards

    The Barr Group's Embedded C Coding Standard (BARR-C:2018, which originates from Netrino's 2009 Embedded C Coding Standard) is second in popularity only to MISRA C among coding standards used by the embedded-systems industry. However, the choice between MISRA C:2012 and BARR-C:2018 need not be a hard decision, since they are complementary in two quite different ways. On the one hand, BARR-C:2018 has removed all the incompatibilities with respect to MISRA C:2012 that were present in the previous edition (BARR-C:2013). As a result, disregarding programming style, BARR-C:2018 defines a subset of C that, while preventing a significant number of programming errors, is larger than the one defined by MISRA C:2012. On the other hand, concerning programming style, whereas MISRA C leaves this to individual organizations, BARR-C:2018 defines a programming style aimed primarily at minimizing programming errors. As a result, BARR-C:2018 can be seen as a first, dramatically useful step to C language subsetting that is suitable for all kinds of projects; critical projects can then evolve toward MISRA C:2012 compliance smoothly while maintaining the BARR-C programming style. In this paper, we introduce BARR-C:2018, describe its relationship with MISRA C:2012, and discuss the parallel and serial adoption of the two coding standards.
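
    As a rough illustration of the kind of code both standards steer authors away from, the sketch below contrasts a loosely written fragment with a rewrite in the defensive style they encourage (fixed-width integer types, braces on every control-flow body, no side effects inside conditions, a single exit point). The function names are invented, and the rules paraphrased here are illustrative rather than quotations from either standard.

        /* Loosely written C: relies on the implementation-defined width of
         * int, omits braces, and hides a side effect inside a condition. */
        int accumulate(int *buf, int n) {
            int i, acc = 0;
            for (i = 0; i < n; i++)
                if ((acc += buf[i]) > 1000) return acc;
            return acc;
        }

        /* Closer to the spirit of BARR-C:2018 / MISRA C:2012: fixed-width
         * types from <stdint.h>, braces on every body, no side effects in
         * the condition, and a single point of return. */
        #include <stdint.h>
        #include <stddef.h>

        int32_t accumulate_checked(const int32_t *buf, size_t n)
        {
            int32_t acc = 0;
            for (size_t i = 0u; i < n; ++i) {
                acc += buf[i];
                if (acc > 1000) {
                    break;
                }
            }
            return acc;
        }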

    Mechanism-Based Approach to the Economic Evaluation of Pharmaceuticals

    Master's thesis, Ciências Biofarmacêuticas, Universidade de Lisboa, Faculdade de Farmácia, 2018. Pharmacoeconomics is the discipline concerned with optimal allocation of resources to maximize population health from the use of medicines. Given that resources for health care are finite, economic evaluation involves estimation of the opportunity cost, that is, the marginal benefits forgone as a result of displacing existing treatments or services to fund new medicines. Pharmacokinetics is the science that studies the movement of drugs through the body, including the absorption, distribution, metabolism and elimination of drugs and their metabolites. With the advent of analytical chemistry and sophisticated quantification methods, together with increased computing power, pharmacokinetics has developed exponentially as a science. One of its fastest-growing areas is population pharmacokinetics: although the pharmacokinetics of a drug can be studied in each individual separately, the population approach is valuable for studying patient groups that are difficult to investigate, such as premature infants and patients with hepatic or renal impairment. In population pharmacokinetics, all individuals are evaluated simultaneously with a nonlinear mixed-effects model. Nonlinear means that the dependent variable (concentration) is related nonlinearly to the combination of independent variables and model parameters. Fixed effects are the parameters that do not vary across individuals, whereas random effects are the parameters that do vary across individuals. The main objective of population pharmacokinetic modelling is to estimate the population pharmacokinetic parameters and the sources of variability; the remaining objectives include relating the concentrations observed after the administered dose to predictive covariates detected in the evaluated population. In population pharmacokinetics, individuals may contribute only sparse plasma-concentration data. Building a population pharmacokinetic model involves five fundamental components: the data, the structural model, the statistical model, the covariate model and the modelling software. Structural models define the plasma concentration-time profile within individuals. Statistical models describe the random variability in the population that is not otherwise explained, such as between-occasion variability, between-subject variability and residual variability. Covariate models explain part of the estimated variability through population characteristics expressed as covariates. Modelling software, such as nonlinear mixed-effects modelling software, combines the data with the models and applies an estimation method to obtain the parameters of the structural, statistical and covariate models that describe the data. In population pharmacokinetic modelling, the software minimises an objective function value, performing maximum-likelihood estimation. When fitting population data, the estimated concentration for each individual is influenced by the variance of the population parameters and of each individual parameter, and by the variance between predicted and observed concentrations.
    The evaluation of the marginal likelihood depends on the random-effect parameters (η) and the population fixed effects, and it has no analytical solution; in the search for the maximum likelihood, numerous approaches have been applied to approximate the marginal likelihood, of which FOCE and LAPLACE are the oldest, approximating the true likelihood with a simplified additional function. The aim of this dissertation, carried out within the Master's programme in Biopharmaceutical Sciences, was to establish data-simulation tools based on population pharmacokinetic models for a subsequent pharmacoeconomic analysis, examining the positive and adverse impact of the fixed-dose combination of Glecaprevir and Pibrentasvir (Mavyret®), a medicine used to treat chronic hepatitis C virus infection. The simulations were performed using the R software and its Shiny package, where R is a language for data analysis, statistical computing and graphics. The simulated population was grouped according to shared covariates, with 1000 individuals simulated per scenario.
    The FDA submission report for Mavyret® was used as the reference for the population pharmacokinetic modelling; the model it describes was developed from the clinical studies performed for the drug product, and several covariates were identified in it. The described model was implemented in the R software, and the impact of the covariates was studied with the Shiny application. The observed population was categorized into groups such as patients treated with Glecaprevir/Pibrentasvir with renal impairment, and patients with renal impairment and cirrhosis. Individual models were created for each of the groups, and the comparison of their concentration-time profiles was made easier by the R and Shiny web browser, where the results update automatically when any covariate or variable is changed. Different final models were produced, and for the simulated population the pharmacokinetic parameters AUC and Cmax were calculated for descriptive statistical analysis. Although the implementation of the population pharmacokinetic models was accomplished in R and Shiny, and data were simulated for the different population scenarios, pharmacoeconomic modelling and the application of pharmacoeconomic methodologies were not carried out.
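
    To make the nonlinear mixed-effects structure described above concrete, a textbook-style one-compartment population model with first-order absorption and a proportional error model can be written as follows; this is a generic sketch, not the covariate model from the Mavyret® FDA report:

        C_{ij} = f(\theta_i, t_{ij}) \, (1 + \varepsilon_{ij}), \qquad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)

        f(\theta_i, t) = \frac{F \, D \, k_{a,i}}{V_i \, (k_{a,i} - k_{e,i})} \left( e^{-k_{e,i} t} - e^{-k_{a,i} t} \right), \qquad k_{e,i} = \mathrm{CL}_i / V_i

        \mathrm{CL}_i = \mathrm{CL}_{\mathrm{pop}} \, e^{\eta_{\mathrm{CL},i}}, \qquad V_i = V_{\mathrm{pop}} \, e^{\eta_{V,i}}, \qquad \eta_i \sim \mathcal{N}(0, \Omega)

    Here C_{ij} is the j-th observed concentration in individual i, D the dose, F the bioavailability, CL_pop and V_pop the population (fixed-effect) parameters, the η terms the between-subject random effects, and ε_{ij} the residual error. Covariates such as renal function would typically enter as multiplicative factors on CL_pop or V_pop, and exposure metrics follow from the individual parameters, e.g. AUC = F·D/CL_i and Cmax as the maximum of f over t.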

    Applications of Formal Methods to Specification and Safety of Avionics Software

    This report treats several topics in applications of formal methods to avionics software development. Most of these topics concern decision tables, an orderly, easy-to-understand format for formally specifying complex choices among alternative courses of action. The topics relating to decision tables include: generalizations of decision tables that are more concise and support the use of decision tables in a refinement-based formal software development process; a formalism for systems of decision tables with behaviors; an exposition of Parnas tables for users of decision tables; and test coverage criteria and decision tables. We outline features of a revised version of ORA's decision table tool, Tablewise, which will support many of the new ideas described in this report. We also survey formal safety analysis of specifications and software.
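
    To give a flavour of the decision-table format discussed above, the sketch below encodes a small, made-up two-condition table as data in C and selects an action by matching a rule. The relief-valve controller, its conditions and its actions are hypothetical and are not taken from the report.

        #include <stdbool.h>
        #include <stddef.h>

        /* A tiny, made-up decision table for a relief-valve controller.
         * Each rule (one column of the table) lists the required values of
         * the two conditions and the action chosen when both match. */
        enum action { CLOSE_RELIEF, OPEN_RELIEF, FAIL_SAFE };

        struct rule {
            bool pressure_high; /* condition 1: pressure above limit?   */
            bool sensor_fault;  /* condition 2: pressure sensor faulty? */
            enum action act;    /* action selected by this rule         */
        };

        static const struct rule table[] = {
            { false, false, CLOSE_RELIEF },
            { true,  false, OPEN_RELIEF  },
            { false, true,  FAIL_SAFE    },
            { true,  true,  FAIL_SAFE    },
        };

        /* Return the action of the first rule matching the current inputs. */
        enum action decide(bool pressure_high, bool sensor_fault)
        {
            for (size_t i = 0; i < sizeof table / sizeof table[0]; ++i) {
                if (table[i].pressure_high == pressure_high &&
                    table[i].sensor_fault == sensor_fault) {
                    return table[i].act;
                }
            }
            return FAIL_SAFE; /* the table covers all cases, so unreachable */
        }

    Completeness (every combination of conditions has a rule) and consistency (no two rules with the same conditions select different actions) are exactly the properties a tool such as Tablewise is meant to check.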

    Standardized development of computer software. Part 1: Methods

    This work is a two-volume set on standards for modern software engineering methodology. This volume presents a tutorial and practical guide to the efficient development of reliable computer software, a unified and coordinated discipline for design, coding, testing, documentation, and project organization and management. The aim of the monograph is to provide formal disciplines for increasing the probability of securing software that is characterized by high degrees of initial correctness, readability, and maintainability, and to promote practices which aid in the consistent and orderly development of a total software system within schedule and budgetary constraints. These disciplines are set forth as a set of rules to be applied during software development to drastically reduce the time traditionally spent in debugging, to increase documentation quality, to foster understandability among those who must come into contact with the software, and to facilitate operations and alterations of the program as requirements on the program environment change.

    Inherently flexible software

    Software evolution is an important and expensive consequence of deploying software. As Lehman's First Law of Program Evolution states, software must be changed to satisfy new user requirements or become progressively less useful to its stakeholders. Software evolution is difficult for a multitude of reasons, most notably an inherent lack of evolveability in software, design decisions and existing requirements that are difficult to change, and conflicts between new requirements and existing assumptions and requirements. Software engineering has traditionally focussed on improvements in software development techniques, with little conscious regard for their effects on software evolution. The thesis emphasises design for change, a philosophy that stems from ideas in preventive maintenance and places the ease of software evolution more at the centre of the design of software systems than it is at present. The approach involves exploring issues of evolveability, such as adaptability, flexibility and extensibility, with respect to existing software languages, models and architectures. A software model, SEvEn, is proposed which improves on the evolveability of these existing software models by improving their adaptability, flexibility and extensibility, and which provides a way to determine the ripple effects of changes through a reflective model of a software system. The main conclusion is that, whilst software evolveability can be improved, complete adaptability, flexibility and extensibility of a software system is not possible. In addition, ripple effects cannot be completely eradicated, because assumptions will always persist in a software system and new requirements may conflict with existing requirements. However, the proposed reflective model of software (which consists of a set of software entities, or abstractions, with the characteristic of increased evolveability) provides traceability of ripple effects because it explicitly models the dependencies that exist between software entities, determines how software entities can change, ascertains the adaptability of software entities to changes in other software entities on which they depend, and determines how changes to software entities affect those software entities that depend on them.
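
    At its core, the reflective model described above is an explicit dependency graph over software entities on which ripple effects can be traced. The sketch below shows one naive way such tracing could look, as a transitive walk over reverse dependency edges; the entities and dependencies are invented, and this is an illustration of the general idea rather than the SEvEn model itself.

        #include <stdbool.h>
        #include <stdio.h>

        #define N_ENTITIES 5

        /* Adjacency matrix: depends_on[a][b] is true when entity a depends on b.
         * Here e0 -> e1 -> e2, e3 -> e2, and e4 depends on nothing. */
        static const bool depends_on[N_ENTITIES][N_ENTITIES] = {
            /* e0 */ { false, true,  false, false, false },
            /* e1 */ { false, false, true,  false, false },
            /* e2 */ { false, false, false, false, false },
            /* e3 */ { false, false, true,  false, false },
            /* e4 */ { false, false, false, false, false },
        };

        /* Mark every entity that may be affected, directly or transitively,
         * by a change to `changed`, by walking the reverse dependency edges. */
        static void ripple(int changed, bool affected[N_ENTITIES])
        {
            affected[changed] = true;
            for (int a = 0; a < N_ENTITIES; ++a) {
                if (!affected[a] && depends_on[a][changed]) {
                    ripple(a, affected); /* a depends on the changed entity */
                }
            }
        }

        int main(void)
        {
            bool affected[N_ENTITIES] = { false };
            ripple(2, affected); /* suppose entity e2 changes */
            for (int a = 0; a < N_ENTITIES; ++a) {
                if (affected[a]) {
                    printf("e%d may be affected\n", a);
                }
            }
            return 0;
        }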

    Differentiable world programs

    Modern artificial intelligence (AI) has created exciting new opportunities for building intelligent robots. In particular, gradient-based learning architectures (deep neural networks) have tremendously improved 3D scene understanding in terms of perception, reasoning, and action. However, these advancements have undermined many ``classical'' techniques developed over the last few decades. We postulate that a blend of ``classical'' and ``learned'' methods is the most promising path to developing flexible, interpretable, and actionable models of the world: a necessity for intelligent embodied agents. ``What is the ideal way to combine classical techniques with gradient-based learning architectures for a rich understanding of the 3D world?'' is the central question in this dissertation. This understanding enables a multitude of applications that fundamentally impact how embodied agents perceive and interact with their environment.
    This dissertation, dubbed ``differentiable world programs'', unifies efforts from multiple closely related but currently disjoint fields including robotics, computer vision, computer graphics, and AI. Our first contribution---gradSLAM---is a fully differentiable dense simultaneous localization and mapping (SLAM) system. By enabling gradient computation through otherwise non-differentiable components such as nonlinear least squares optimization, ray casting, visual odometry, and dense mapping, gradSLAM opens up new avenues for integrating classical 3D reconstruction and deep learning. Our second contribution---taskography---proposes a task-conditioned sparsification of large 3D scenes encoded as 3D scene graphs. This enables classical planners to match (and surpass) state-of-the-art learning-based planners by focusing computation on task-relevant scene attributes. Our third and final contribution---gradSim---is a fully differentiable simulator that composes differentiable physics and graphics engines to enable physical parameter estimation and visuomotor control, solely from videos or a still image.
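
    The central trick of obtaining gradients through components usually treated as black boxes can be illustrated at toy scale by unrolling an iterative solver and differentiating through the unrolled steps. The sketch below does this for a one-dimensional least-squares problem, propagating the derivative alongside the iterate and checking it against a finite difference; it is only meant to convey the idea and does not reflect the actual gradSLAM implementation.

        #include <stdio.h>

        /* Run `steps` gradient steps on f(x; theta) = (x - theta)^2 starting
         * from x0, propagating dx/dtheta alongside x (forward-mode
         * differentiation of the unrolled solver). */
        static void unrolled_solve(double theta, double x0, double alpha, int steps,
                                   double *x_out, double *dx_dtheta_out)
        {
            double x = x0;
            double dx_dtheta = 0.0;
            for (int k = 0; k < steps; ++k) {
                /* x_{k+1} = x_k - 2*alpha*(x_k - theta), so
                 * d x_{k+1}/d theta = (1 - 2*alpha) * d x_k/d theta + 2*alpha */
                dx_dtheta = (1.0 - 2.0 * alpha) * dx_dtheta + 2.0 * alpha;
                x = x - 2.0 * alpha * (x - theta);
            }
            *x_out = x;
            *dx_dtheta_out = dx_dtheta;
        }

        int main(void)
        {
            const double theta = 3.0, x0 = 0.0, alpha = 0.1, h = 1e-6;
            const int steps = 25;
            double x, dx, x_plus, dx_unused;

            unrolled_solve(theta, x0, alpha, steps, &x, &dx);
            unrolled_solve(theta + h, x0, alpha, steps, &x_plus, &dx_unused);

            printf("solution x         = %f\n", x);
            printf("analytic dx/dtheta = %f\n", dx);
            printf("numeric  dx/dtheta = %f\n", (x_plus - x) / h);
            return 0;
        }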

    Code Generation for High Performance PDE Solvers on Modern Architectures

    Numerical simulation with partial differential equations is an important discipline in high performance computing. Notable application areas include geosciences, fluid dynamics, solid mechanics and electromagnetics. Recent hardware developments have made it increasingly hard to achieve very good performance, both because numerical algorithms suited to the hardware are lacking and because efficient implementations of these algorithms are not available. Modern CPUs require a sufficiently high arithmetic intensity in order to unfold their full potential. In this thesis, we use a numerical scheme that is well suited to this scenario: the Discontinuous Galerkin Finite Element Method on cuboid meshes can be implemented with optimal complexity by exploiting the tensor-product structure of basis functions and quadrature formulae using a technique called sum factorization. A matrix-free implementation of this scheme significantly lowers the memory footprint of the method and delivers a fully compute-bound algorithm. An efficient implementation of this scheme on a modern CPU requires maximum use of the processor's SIMD units. General-purpose compilers are not capable of autovectorizing traditional PDE simulation codes, so high-performance implementations must spell out SIMD instructions explicitly. With SIMD widths increasing in recent years (reaching their current peak of 512 bits in the Intel Skylake architecture) and programming languages providing no tools to target SIMD units directly, such code suffers from a performance portability issue. This work proposes generative programming as a solution to this issue. To this end, we develop a toolchain that translates a PDE problem expressed in a domain-specific language into machine-dependent, optimized C++ code. This toolchain is embedded into the existing user workflow of the DUNE project, an open source framework for the numerical solution of PDEs. Compared to other such toolchains, special emphasis is put on an intermediate representation that enables performance-oriented transformations. Furthermore, this thesis defines a new class of SIMD vectorization strategies that operate on batches of subkernels within one integration kernel. The space of these vectorization strategies is explored systematically from within the code generator in an autotuning procedure. We demonstrate the performance of our vectorization strategies and their implementation by providing measurements on the Intel Haswell and Intel Skylake architectures. We present numbers for the diffusion-reaction equation, the Stokes equations and Maxwell's equations, achieving up to 40% of the machine's theoretical floating point performance for an application of the DG operator.
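
    As a concrete picture of the sum-factorization idea mentioned above, the sketch below evaluates a 2D tensor-product polynomial at tensor-product quadrature points in two 1D contraction passes instead of one four-nested loop. The sizes and routine names are illustrative and unrelated to the DUNE code generator.

        #include <stdio.h>

        #define ND 4 /* 1D degrees of freedom per direction */
        #define NQ 5 /* 1D quadrature points per direction   */

        /* Naive evaluation: O(NQ^2 * ND^2) operations per element. */
        static void evaluate_naive(double B[NQ][ND], double u[ND][ND],
                                   double uq[NQ][NQ])
        {
            for (int qx = 0; qx < NQ; ++qx) {
                for (int qy = 0; qy < NQ; ++qy) {
                    double s = 0.0;
                    for (int i = 0; i < ND; ++i) {
                        for (int j = 0; j < ND; ++j) {
                            s += B[qx][i] * B[qy][j] * u[i][j];
                        }
                    }
                    uq[qx][qy] = s;
                }
            }
        }

        /* Sum-factorized evaluation: two passes of 1D contractions,
         * O(NQ * ND^2 + NQ^2 * ND) operations per element. */
        static void evaluate_sumfac(double B[NQ][ND], double u[ND][ND],
                                    double uq[NQ][NQ])
        {
            double tmp[NQ][ND];
            for (int qx = 0; qx < NQ; ++qx) {      /* contract x direction */
                for (int j = 0; j < ND; ++j) {
                    double s = 0.0;
                    for (int i = 0; i < ND; ++i) {
                        s += B[qx][i] * u[i][j];
                    }
                    tmp[qx][j] = s;
                }
            }
            for (int qx = 0; qx < NQ; ++qx) {      /* contract y direction */
                for (int qy = 0; qy < NQ; ++qy) {
                    double s = 0.0;
                    for (int j = 0; j < ND; ++j) {
                        s += B[qy][j] * tmp[qx][j];
                    }
                    uq[qx][qy] = s;
                }
            }
        }

        int main(void)
        {
            double B[NQ][ND], u[ND][ND], a[NQ][NQ], b[NQ][NQ];
            for (int q = 0; q < NQ; ++q) {
                for (int i = 0; i < ND; ++i) {
                    B[q][i] = 1.0 / (1.0 + q + i); /* arbitrary test data */
                }
            }
            for (int i = 0; i < ND; ++i) {
                for (int j = 0; j < ND; ++j) {
                    u[i][j] = (double)(i * ND + j);
                }
            }
            evaluate_naive(B, u, a);
            evaluate_sumfac(B, u, b);
            double max_diff = 0.0;
            for (int qx = 0; qx < NQ; ++qx) {
                for (int qy = 0; qy < NQ; ++qy) {
                    double d = a[qx][qy] - b[qx][qy];
                    if (d < 0.0) { d = -d; }
                    if (d > max_diff) { max_diff = d; }
                }
            }
            printf("max difference: %g\n", max_diff); /* expected ~1e-15 */
            return 0;
        }

    The same two-pass structure is what makes the inner loops regular enough to map onto SIMD lanes, which is the starting point for the vectorization strategies explored in the thesis.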