5 research outputs found

    The Beauty of Complex Designs

    The increasing use of omics data in epidemiology enables many novel study designs, but also introduces challenges for data analysis. We describe the possibilities for systems epidemiological designs in the Norwegian Women and Cancer (NOWAC) study and show how the complexity of NOWAC enables many beautiful new study designs. We discuss the challenges of implementing the designs and analyzing the data. Finally, we propose a systems architecture for swift design and exploration of epidemiological studies.

    Deprecating the Observer Pattern with Scala.React

    Programming interactive systems by means of the observer pattern is hard and error-prone, yet it is still the implementation standard in many production environments. We show how to integrate different reactive programming abstractions into a single framework that helps migrate from observer-based event-handling logic to more declarative implementations. Our central API layer embeds an extensible higher-order data-flow DSL into our host language. This embedding is enabled by a continuation-passing-style transformation.
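
    A minimal sketch in Python (not Scala.React's actual Scala API) of the shift this abstract describes: instead of wiring state changes through hand-written observer callbacks, a derived value declares its dependencies once and is kept up to date automatically. All class and function names below are invented for illustration.

    # Observer-style state propagation vs. a declarative, signal-like abstraction.

    class Observable:
        """Classic observer pattern: state changes push to registered callbacks."""
        def __init__(self, value):
            self._value = value
            self._observers = []

        def subscribe(self, callback):
            self._observers.append(callback)

        def set(self, value):
            self._value = value
            for callback in self._observers:
                callback(value)

        def get(self):
            return self._value

    class Signal:
        """Declarative alternative: a derived value recomputed from its sources."""
        def __init__(self, compute, *sources):
            self._compute = compute
            self._sources = sources
            for source in sources:
                source.subscribe(lambda _: self._refresh())
            self._refresh()

        def _refresh(self):
            self.value = self._compute(*(s.get() for s in self._sources))

    width = Observable(2)
    height = Observable(3)
    # The dependency is stated once, declaratively, instead of being scattered
    # across manually written event handlers.
    area = Signal(lambda w, h: w * h, width, height)
    width.set(5)
    print(area.value)  # 15

    The contrast is the point of the paper's approach: the update logic lives in one declarative definition rather than in callbacks spread across the program.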

    Toward a Collaborative Platform for Hybrid Designs Sharing a Common Cohort

    This doctoral thesis binds together four included papers into a thematic whole and is simultaneously an independent work proposing a platform that facilitates epidemiological research.

    Population-based prospective cohort studies typically recruit a relatively large group of participants representative of a studied population and follow them over years or decades. This group of participants is called a cohort. As part of the study, the participants may be asked to answer extensive questionnaires, undergo medical examinations, donate blood samples, and participate in several rounds of follow-ups. The collected data can also include information from other sources, such as health registers. In prospective cohort studies, the participants initially do not have the investigated diagnoses, but statistically, a certain percentage will be diagnosed with a disease each year. The studies enable the researchers to investigate how those who got a disease differ from those who did not. Often, many new studies can be nested within a cohort study; data for a subgroup of the cohort is then selected and analyzed. A new study combined with an existing cohort is said to have a hybrid design. When a research group uses the same cohort as a basis for multiple new studies, these studies often share similarities in the workflow for study design and analysis. The thesis shows the potential for a platform that encourages the reuse of work from previous studies and systematizes the study design workflows to enhance time efficiency and reduce the risk of errors.

    However, the study data are subject to strict acts and regulations pertaining to privacy and research ethics. Therefore, the data must be stored and accessed within a secured IT environment where researchers log in to conduct analyses, with minimal possibilities to install analytics software not already provided by default. Further, transferring the data from the secured IT environment to a local computer or a public cloud is prohibited. Nevertheless, researchers can usually upload and run script files, e.g., written in R and run in RStudio. A consequence is that researchers, who often have limited software engineering skills, may rely mainly on self-written code for their analyses, possibly developed unsystematically, with a high risk of errors and of reinventing solutions already solved in preceding studies within the group.

    The thesis makes a case for a platform providing collaboration software as a service (SaaS) that addresses the challenges of the described research context, and proposes its architecture and design. Its main characteristic, and contribution, is the separation of concerns between the SaaS, which operates independently of the data, and a secured IT environment where the data can be accessed and analyzed. The platform lets the researchers define the data analysis for a study using the cloud-based software, and the definition is then automatically transformed into an executable version represented as source code in a scripting language already supported by the secure environment where the data resides. The author has not found existing systems that solve the same problem in a similar way. However, the work is informed by cloud computing, workflow management systems, data analysis pipelines, low-code, no-code, and model-driven development.
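
    The abstract stays at the architecture level, so the following is only a hypothetical Python sketch of its central idea: the cloud service never touches the data; it merely transforms a declarative analysis definition into R source code that a researcher can upload to the secure environment. The analysis-definition format and all names are invented for this example.

    # Invented illustration of the separation of concerns: the service sees only
    # the analysis definition, never the data, and emits plain R script text.

    ANALYSIS = {
        "dataset": "cohort_subset.csv",  # exists only inside the secure environment
        "outcome": "diagnosis",
        "exposures": ["smoking", "bmi"],
        "model": "logistic",
    }

    def to_r_script(spec: dict) -> str:
        """Render a declarative analysis spec as an R script (text only)."""
        formula = f'{spec["outcome"]} ~ {" + ".join(spec["exposures"])}'
        lines = [f'data <- read.csv("{spec["dataset"]}")']
        if spec["model"] == "logistic":
            lines.append(f"fit <- glm({formula}, data = data, family = binomial)")
        else:
            lines.append(f"fit <- lm({formula}, data = data)")
        lines.append("summary(fit)")
        return "\n".join(lines)

    print(to_r_script(ANALYSIS))
    # data <- read.csv("cohort_subset.csv")
    # fit <- glm(diagnosis ~ smoking + bmi, data = data, family = binomial)
    # summary(fit)

    Because the output is an ordinary R script, it can be run in the secure environment's existing tooling (e.g., RStudio) without installing anything new there.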

    High-Level GPU Programming: Domain-Specific Optimization and Inference

    When writing computer software, one is often forced to balance the need for high run-time performance against high programmer productivity. Using a high-level language, it is often possible to cut development times, but this typically comes at the cost of reduced run-time performance. Using a lower-level language, programs can be made very efficient, but at the cost of increased development time.

    Real-time computer graphics is an area with very high demands on both performance and visual quality. Typically, large portions of such applications are written in lower-level languages and also rely on dedicated hardware, in the form of programmable graphics processing units (GPUs), for handling computationally demanding rendering algorithms. These GPUs are parallel stream processors, specialized towards computer graphics, with computational performance more than an order of magnitude higher than that of corresponding CPUs. This has revolutionized computer graphics and also led to GPUs being used to solve more general numerical problems, such as fluid and physics simulation, protein folding, image processing, and databases. Unfortunately, the highly specialized nature of GPUs has also made them difficult to program.

    In this dissertation we show that GPUs can be programmed at a higher level, while maintaining performance, compared to current lower-level languages. By constructing a domain-specific language (DSL), which provides appropriate domain-specific abstractions and user annotations, it is possible to write programs in a more abstract and modular manner. Using knowledge of the domain, the DSL compiler can generate very efficient code. We show by experiment that the performance of our DSLs equals that of GPU programs written by hand in current low-level languages, while control over the trade-offs between visual quality and performance is retained.

    In the papers included in this dissertation, we present domain-specific languages targeted at numerical processing and computer graphics, respectively. These DSLs have been implemented as embedded languages in Python, a dynamic programming language that provides a rich set of high-level features. In this dissertation we show how these features can be used to facilitate the construction of embedded languages.
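
    The dissertation's actual DSLs are not reproduced in the abstract; the following is an invented toy example of the general technique it names: in a dynamic host language such as Python, operator overloading can make ordinary-looking expressions build a syntax tree rather than compute values, and a compiler can then translate that tree into efficient target code (GPU code, in the dissertation's case).

    # Toy embedded DSL: expressions are recorded, not evaluated, so they can
    # later be optimized and compiled for a specialized target.

    class Expr:
        def __add__(self, other):
            return BinOp("+", self, wrap(other))
        def __mul__(self, other):
            return BinOp("*", self, wrap(other))

    class Var(Expr):
        def __init__(self, name):
            self.name = name
        def emit(self):
            return self.name

    class Const(Expr):
        def __init__(self, value):
            self.value = value
        def emit(self):
            return repr(self.value)

    class BinOp(Expr):
        def __init__(self, op, lhs, rhs):
            self.op, self.lhs, self.rhs = op, lhs, rhs
        def emit(self):
            return f"({self.lhs.emit()} {self.op} {self.rhs.emit()})"

    def wrap(x):
        return x if isinstance(x, Expr) else Const(x)

    # The user writes ordinary-looking Python...
    x, y = Var("x"), Var("y")
    expr = x * y + 2.0

    # ...but the DSL has captured a tree it can analyze and translate.
    print(expr.emit())  # ((x * y) + 2.0)

    A real DSL compiler would apply domain-specific optimizations to the captured tree before emitting target code; this sketch only shows the capture step.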

    Reusing Conceptual Models: Language and Compiler

    Undergraduate thesis (TCC) - Universidade Federal de Santa Catarina, Centro Tecnológico, Ciências da Computação. This report presents a textual language for conceptual modeling (based on UML classes/associations and OCL constraints) and a compiler that can generate code in any language or technology through extensible text templates. The language and the compiler allow the specification of the information managed by software systems that are increasingly distributed and constantly changing. From a single source, automatic code generation keeps the implementations consistent with their specification across different platforms and technologies. Moreover, as the technological horizon expands, the text templates can be modified to adopt new technologies. Unlike other approaches, such as MDA and MPS, the tooling accompanying this language, together with its textual nature, is expected to ease the integration of model-driven software development into software developers' workflow.
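
    As a hypothetical miniature of the report's approach (not its actual language or compiler), the Python sketch below describes a conceptual model as data and renders it through extensible per-technology text templates, so a single source stays consistent across targets and adopting a new technology only requires adding a template.

    # Invented example: one conceptual model, several target technologies.

    MODEL = {"class": "Person", "attributes": [("name", "String"), ("age", "Int")]}

    # One template per target technology: (class template, field template, type map).
    TEMPLATES = {
        "java": (
            "public class {cls} {{\n{fields}\n}}",
            "    private {typ} {attr};",
            {"String": "String", "Int": "int"},
        ),
        "typescript": (
            "class {cls} {{\n{fields}\n}}",
            "    {attr}: {typ};",
            {"String": "string", "Int": "number"},
        ),
    }

    def generate(model, target):
        cls_tpl, field_tpl, type_map = TEMPLATES[target]
        fields = "\n".join(
            field_tpl.format(attr=attr, typ=type_map[typ])
            for attr, typ in model["attributes"]
        )
        return cls_tpl.format(cls=model["class"], fields=fields)

    print(generate(MODEL, "java"))
    print(generate(MODEL, "typescript"))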