
    Causality, Information and Biological Computation: An algorithmic software approach to life, disease and the immune system

    Full text link
    Biology has taken strong steps towards becoming a computer science, aiming at reprogramming nature after the realisation that nature herself has reprogrammed organisms by harnessing the power of natural selection and the digital, prescriptive nature of replicating DNA. Here we further unpack ideas related to computability, algorithmic information theory and software engineering, in the context of the extent to which biology can be (re)programmed, and of how we may go about doing so in a more systematic way with all the tools and concepts offered by theoretical computer science, in a translation exercise from computing to molecular biology and back. These concepts provide a means of hierarchical organization, thereby blurring previously clear-cut lines between concepts such as matter and life, or between tumour types that are otherwise taken to be different yet may not have different causes. This does not diminish the properties of life or make its components and functions less interesting. On the contrary, this approach makes for a more encompassing and integrated view of nature, one that subsumes observer and observed within the same system, and that can generate new perspectives and tools with which to view complex diseases such as cancer, approaching them afresh from a software-engineering viewpoint that casts evolution in the role of programmer, cells as computing machines, DNA and genes as instructions and computer programs, viruses as hacking devices, the immune system as a software debugging tool, and disease as an information-theoretic battlefield where all these forces are deployed. We show how information theory and algorithmic programming may explain fundamental mechanisms of life and death.

    Comment: 30 pages, 8 figures. Invited chapter contribution to Information and Causality: From Matter to Life, Sara I. Walker, Paul C.W. Davies and George Ellis (eds.), Cambridge University Press
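
    In practice, the algorithmic information content invoked by this line of work is often approximated by lossless compression: the shorter a sequence's compressed form, the more regular, and hence more "programmable", it is. The Python sketch below illustrates this standard proxy; the DNA-like sequences are hypothetical illustrations, not data from the chapter.

```python
# Minimal sketch: approximating algorithmic (Kolmogorov) complexity with a
# lossless compressor, a common stand-in used in algorithmic information
# theory. The sequences below are hypothetical, not from the chapter.
import random
import zlib

def approx_complexity(s: str) -> int:
    """Approximate algorithmic information content by compressed size."""
    return len(zlib.compress(s.encode("ascii"), 9))

repetitive = "ACGT" * 250                    # highly regular sequence
random.seed(0)
random_seq = "".join(random.choice("ACGT") for _ in range(1000))

print(approx_complexity(repetitive))         # small: a short program regenerates it
print(approx_complexity(random_seq))         # large: close to incompressible
```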

    Simple Muscle Architecture Analysis (SMA): an ImageJ macro tool to automate measurements in B-mode ultrasound scans

    Full text link
    In vivo measurements of muscle architecture (i.e. the spatial arrangement of muscle fascicles) are routinely included in research and clinical settings to monitor muscle structure, function and plasticity. However, in most cases such measurements are performed manually, and more reliable and time-efficient automated methods are either lacking completely, or are inaccessible to those without expertise in image analysis. In this work, we propose an ImageJ script to automate the entire analysis process of muscle architecture in ultrasound images: Simple Muscle Architecture Analysis (SMA). Images are filtered in the spatial and frequency domains with built-in commands and external plugins to highlight aponeuroses and fascicles. Fascicle dominant orientation is then computed in regions of interest using the OrientationJ plugin. Bland-Altman plots of analyses performed manually or with SMA indicates that the automated analysis does not induce any systematic bias and that both methods agree equally through the range of measurements. Our test results illustrate the suitability of SMA to analyse images from superficial muscles acquired with a broad range of ultrasound settings.Comment: 8 pages, 7 figures, 1 appendi
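
    As a rough illustration of the agreement analysis mentioned above, the following Python sketch computes the Bland-Altman bias and 95% limits of agreement between manual and automated measurements; the values (pennation angles, in degrees) are hypothetical.

```python
# Minimal sketch of a Bland-Altman agreement analysis: mean bias and 95%
# limits of agreement between manual and automated measurements.
import numpy as np

def bland_altman(manual: np.ndarray, automated: np.ndarray):
    diffs = automated - manual
    means = (automated + manual) / 2.0
    bias = diffs.mean()                          # systematic offset
    sd = diffs.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return means, diffs, bias, loa

# Hypothetical pennation angles (degrees), manual vs. automated analysis.
manual = np.array([14.2, 15.1, 13.8, 16.0, 14.9])
automated = np.array([14.0, 15.4, 13.9, 15.7, 15.1])

_, _, bias, (lo, hi) = bland_altman(manual, automated)
print(f"bias = {bias:.2f}, limits of agreement = [{lo:.2f}, {hi:.2f}]")
```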

    Blinded assessment of treatment effects utilizing information about the randomization block length

    Get PDF
    It is essential for the integrity of double-blind clinical trials that, over the course of the study, both the individual treatment allocations of the patients and the treatment effect remain unknown to all involved persons. Recently, methods have been proposed for which it was claimed that they allow reliable estimation of the treatment effect from blinded data by using information about the block length of the randomization procedure. If this were true, it would be difficult to preserve blinding without taking further measures. The suggested procedures apply to continuous data. We investigate the properties of these methods thoroughly with repeated simulations per scenario. Furthermore, we propose a method for blinded treatment effect estimation in the case of binary data, and we develop blinded tests for treatment group differences for both continuous and binary data. We report results of comprehensive simulation studies that investigate the features of these procedures. It is shown that for sample sizes and treatment effects which are typical in clinical trials, no reliable inference can be made on the treatment group difference, owing to the bias and imprecision of the blinded estimates.
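
    The conclusion can be illustrated with a minimal Python simulation (not the authors' block-length procedure): under 1:1 allocation the pooled variance is approximately sigma^2 + delta^2/4, so a blinded effect estimate can only be backed out under an assumed within-group variance, and it turns out biased and imprecise. All parameter values below are hypothetical.

```python
# Minimal sketch: why blinded treatment-effect estimation is unreliable.
# With 1:1 allocation, Var(pooled) ~= sigma^2 + delta^2/4, so a blinded
# estimate of |delta| requires assuming sigma^2 is known.
import numpy as np

rng = np.random.default_rng(1)

def blinded_delta_estimate(pooled: np.ndarray, sigma_assumed: float) -> float:
    s2 = pooled.var(ddof=1)
    return 2.0 * np.sqrt(max(0.0, s2 - sigma_assumed**2))

delta_true, sigma = 0.3, 1.0               # a typical small effect
estimates = []
for _ in range(2000):                      # repeated simulations per scenario
    a = rng.normal(0.0, sigma, 50)         # control arm
    b = rng.normal(delta_true, sigma, 50)  # treatment arm
    pooled = np.concatenate([a, b])        # blinded: labels unavailable
    estimates.append(blinded_delta_estimate(pooled, sigma))

# The estimates scatter widely around a biased mean.
print(np.mean(estimates), np.std(estimates))
```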

    A Systematic Approach to Constructing Families of Incremental Topology Control Algorithms Using Graph Transformation

    Full text link
    In the communication systems domain, constructing and maintaining network topologies via topology control (TC) algorithms is an important cross-cutting research area. Network topologies are usually modeled using attributed graphs whose nodes and edges represent the network nodes and their interconnecting links. A key requirement of TC algorithms is to fulfill certain consistency and optimization properties to ensure a high quality of service. Still, few attempts have been made to constructively integrate these properties into the development process of TC algorithms. Furthermore, even though many TC algorithms share substantial parts (such as structural patterns or tie-breaking strategies), few works systematically leverage the commonalities and differences of TC algorithms in a constructive way. In previous work, we addressed the constructive integration of consistency properties into the development process. We outlined a constructive, model-driven methodology for designing individual TC algorithms: valid and high-quality topologies are characterized using declarative graph constraints, and TC algorithms are specified using programmed graph transformation. We applied a well-known static analysis technique to refine a given TC algorithm such that the resulting algorithm preserves the specified graph constraints. In this paper, we extend our constructive methodology by generalizing it to support the specification of families of TC algorithms. To show the feasibility of our approach, we re-engineer six existing TC algorithms and develop e-kTC, a novel energy-efficient variant of the TC algorithm kTC. Finally, we evaluate a subset of the specified TC algorithms using a new tool integration of the graph transformation tool eMoflon and the Simonstrator network simulation framework.

    Comment: Corresponds to the accepted manuscript
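
    For orientation, the basic kTC rule that e-kTC builds on can be sketched in a few lines of Python: in every triangle, inactivate the longest edge if it is at least k times longer than the shortest. This is an illustrative reimplementation over plain adjacency data, not the paper's graph-transformation specification; the threshold k and the example graph are hypothetical.

```python
# Minimal sketch of the kTC topology-control rule: for each triangle,
# inactivate the longest edge if it is >= k times the shortest edge.
from itertools import combinations

def ktc(nodes, weight, k=1.41):
    """weight maps frozenset({u, v}) -> link weight (e.g. distance)."""
    active = set(weight)
    for u, v, w in combinations(nodes, 3):
        tri = [frozenset(p) for p in ((u, v), (v, w), (u, w))]
        if all(e in active for e in tri):            # triangle still intact?
            tri.sort(key=lambda e: weight[e])
            if weight[tri[2]] >= k * weight[tri[0]]:
                active.discard(tri[2])               # drop the longest edge
    return active

# Hypothetical 3-node topology: the long edge a-c gets inactivated.
w = {frozenset(p): d for p, d in [(("a", "b"), 1.0),
                                  (("b", "c"), 1.1),
                                  (("a", "c"), 2.0)]}
print(ktc(["a", "b", "c"], w))
```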

    Validation of Constraints Among Configuration Parameters Using Search-Based Combinatorial Interaction Testing

    Get PDF
    The appeal of highly-configurable software systems lies in their adaptability to users’ needs. Search-based Combinatorial Interaction Testing (CIT) techniques have been specifically developed to drive the systematic testing of such highly-configurable systems. In order to apply these techniques, it is paramount to devise a model of parameter configurations that conforms to the software implementation. This is a non-trivial task. We therefore extend traditional search-based CIT by devising four new testing policies able to check whether the model correctly identifies the constraints among the various software parameters. Our experiments show that one of the new policies detects faults, both in the model and in the software implementation, that are missed by the standard approaches.
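
    To make the setting concrete, here is a minimal Python sketch of constrained pairwise (2-wise) generation, the baseline that search-based CIT extends. The parameters and the constraint are hypothetical, and the greedy strategy merely stands in for the paper's search-based policies.

```python
# Minimal sketch of constrained pairwise (2-wise) test generation: greedily
# pick valid configurations covering the most uncovered parameter-value pairs.
from itertools import combinations, product

# Hypothetical configuration model.
params = {"os": ["linux", "win"], "db": ["sqlite", "pg"], "tls": ["on", "off"]}

def valid(cfg):
    # Hypothetical constraint: the pg backend requires TLS.
    return not (cfg["db"] == "pg" and cfg["tls"] == "off")

def pairs(cfg):
    return {((a, cfg[a]), (b, cfg[b])) for a, b in combinations(sorted(cfg), 2)}

def pairwise_suite():
    configs = [dict(zip(params, vs)) for vs in product(*params.values())]
    configs = [c for c in configs if valid(c)]       # enforce the constraint
    uncovered = set().union(*(pairs(c) for c in configs))
    suite = []
    while uncovered:
        best = max(configs, key=lambda c: len(pairs(c) & uncovered))
        suite.append(best)
        uncovered -= pairs(best)
    return suite

for cfg in pairwise_suite():
    print(cfg)
```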

    A method and a tool for model-based testing of software product lines

    Get PDF
    Advisor: Eliane Martins. Master's dissertation, Universidade Estadual de Campinas, Instituto de Computação.

    Abstract: Software product lines (SPL) are gaining interest because of the increasing demand for customizable products. This is partly because SPLs are an efficient and effective means of delivering products with higher quality at a lower cost. In an SPL, products share common requirements and also have features specific to each one. Testing whether a product implements the common and specific requirements is an important step towards ensuring the good quality of the derived products. However, testing an SPL is a complex task, since the variety of products that can be derived from the combination of common and specific features is huge. Even if only a few specific products are selected, the effort to test them is still significant, since the products vary in terms of the specific features selected. Therefore, reusing test cases from one product to another to determine whether they satisfy the functional requirements may not be possible. Model-based testing (MBT) may be useful in this case: a behavior model can be obtained from the requirements, and this model can be used for automatic test case generation. This work presents a model-based product testing approach (MBPTA) for software product lines in which requirements are centered on use cases. Use cases (UC) are a popular format for representing requirements. From use case descriptions written in a semi-structured format and containing the variability specification, behavior models are automatically generated for a product under test, in the form of a state machine model. Building a state machine is not a trivial task for most practitioners, who are more familiar with textual and informal descriptions of requirements. In general, the manual creation of state machine models from UCs can be time-consuming and error-prone. The goal is to provide test engineers with a method that guides them in creating the artifacts needed to extract a preliminary version of a state model automatically from the requirements. This preliminary model can be refined to become suitable for a test case generation tool, and MBPTA also provides guidelines for this refinement process. As a proof of concept, a prototype tool, MARITACA, was developed, which uses natural language processing techniques to extract state machines from use case descriptions. The text presents the use of the method and the tool in an illustrative example from the literature and in a family of distributed fault-tolerant applications. This study showed the applicability of the proposed method. One of the concerns in SPL testing is the generation of redundant test cases from one product to another. The results, though preliminary, showed that most of the test cases generated for a new product are not redundant, because they involve features specific to each product.
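
    The MBT step described above, generating tests from the extracted state machine, can be sketched as follows. This is a minimal Python illustration of transition coverage; the ATM-style machine is a hypothetical stand-in for a MARITACA-extracted model.

```python
# Minimal sketch of test generation from a state machine: for every
# transition, find a shortest event path reaching it (transition coverage).
from collections import deque

# Hypothetical machine extracted from use-case descriptions.
transitions = {                      # (state, event) -> next state
    ("idle", "insert_card"): "auth",
    ("auth", "pin_ok"): "menu",
    ("auth", "pin_bad"): "idle",
    ("menu", "eject"): "idle",
}

def transition_cover(start="idle"):
    tests = []
    for (src, event), _dst in transitions.items():
        # BFS from the start state to src, then fire the target transition.
        seen, queue = {start}, deque([(start, [])])
        while queue:
            state, path = queue.popleft()
            if state == src:
                tests.append(path + [event])
                break
            for (s, e), d in transitions.items():
                if s == state and d not in seen:
                    seen.add(d)
                    queue.append((d, path + [e]))
    return tests

for t in transition_cover():
    print(" -> ".join(t))
```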

    Time-Space Efficient Regression Testing for Configurable Systems

    Full text link
    Configurable systems are those that can be adapted by selecting from a set of options. They are prevalent, and testing them is important and challenging. Existing approaches for testing configurable systems are either unsound (i.e., they can miss fault-revealing configurations) or do not scale. This paper proposes EvoSPLat, a regression testing technique for configurable systems. EvoSPLat builds on our previously developed technique, SPLat, which explores all configurations dynamically reachable from a test. EvoSPLat is tuned for two scenarios of use in regression testing: Regression Configuration Selection (RCS) and Regression Test Selection (RTS). EvoSPLat for RCS prunes configurations (not tests) that are not impacted by changes, whereas EvoSPLat for RTS prunes tests (not configurations) that are not impacted by changes. Handling both scenarios in the context of evolution is important. Experimental results show that EvoSPLat is promising. We observed a substantial reduction in time (22%) and in the number of configurations (45%) for configurable Java programs. In a case study on a large real-world configurable system (GCC), EvoSPLat reduced the running time by 35%. Comparing EvoSPLat with sampling techniques, 2-wise sampling was the most efficient, but it missed two bugs, whereas EvoSPLat detected all bugs and was, on average, four times faster than 6-wise sampling.

    Comment: 14 pages
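
    The two pruning ideas can be sketched abstractly (hypothetical Python, not the tool's API): RCS keeps only the configurations whose options overlap the change, while RTS keeps only the tests that reach a changed option.

```python
# Minimal sketch of the two regression-pruning scenarios; data structures
# and names are hypothetical, not EvoSPLat's actual implementation.
def rcs(configs, changed):
    """Regression Configuration Selection: drop unimpacted configurations."""
    return [c for c in configs if c & changed]

def rts(tests_to_options, changed):
    """Regression Test Selection: drop tests that touch no changed option."""
    return [t for t, opts in tests_to_options.items() if opts & changed]

configs = [{"LOGGING", "CACHE"}, {"CACHE"}, {"SSL"}]
changed = {"CACHE"}                       # options touched by the code change
print(rcs(configs, changed))              # keeps the two CACHE configurations
print(rts({"t1": {"SSL"}, "t2": {"CACHE", "SSL"}}, changed))  # keeps ['t2']
```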

    AIOCJ: A Choreographic Framework for Safe Adaptive Distributed Applications

    Get PDF
    We present AIOCJ, a framework for programming distributed adaptive applications. Applications are programmed using AIOC, a choreographic language suited to expressing patterns of interaction from a global point of view. AIOC allows the programmer to specify which parts of the application can be adapted. Adaptation takes place at runtime by means of rules, which can change during execution to tackle possibly unforeseen adaptation needs. AIOCJ relies on a solid theory that ensures applications are deadlock-free by construction, even after adaptation. We describe the architecture of AIOCJ, the design of the AIOC language, and an empirical validation of the framework.

    Comment: Technical Report
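
    The rule-based adaptation model can be illustrated conceptually (a Python stand-in, not AIOC syntax; scope names, rules and environment are hypothetical): a marked scope checks its rule list at runtime and falls back to its default behaviour when no rule applies.

```python
# Conceptual sketch of runtime adaptation via rules: an adaptable scope
# re-binds its behaviour when an applicable rule exists. Hypothetical
# illustration only; AIOC's actual semantics is choreographic.
class AdaptableScope:
    def __init__(self, name, default):
        self.name, self.default, self.rules = name, default, []

    def add_rule(self, applies, body):       # rules can arrive at runtime
        self.rules.append((applies, body))

    def run(self, env):
        for applies, body in self.rules:     # first applicable rule wins
            if applies(env):
                return body(env)
        return self.default(env)             # otherwise: original behaviour

checkout = AdaptableScope("checkout", lambda env: f"pay {env['total']}")
print(checkout.run({"total": 100}))          # default behaviour

# Later, an unforeseen need arises: a discount for premium users.
checkout.add_rule(lambda env: env.get("premium", False),
                  lambda env: f"pay {env['total'] * 0.9}")
print(checkout.run({"total": 100, "premium": True}))
```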