
    Monotonic Abstraction Techniques: from Parametric to Software Model Checking

    Monotonic abstraction is a technique introduced in model checking parameterized distributed systems in order to cope with transitions containing global conditions within guards. The technique has been re-interpreted in a declarative setting in previous papers of ours and applied to the verification of fault-tolerant systems under the so-called "stopping failures" model. The declarative reinterpretation consists in logical techniques (quantifier relativizations and, especially, quantifier instantiations) that make sense in a broader context. In fact, we recently showed that such techniques can over-approximate array accelerations, so that they can be employed as a meaningful (and practically effective) component of CEGAR loops in software model checking too.
    Comment: In Proceedings MOD* 2014, arXiv:1411.345
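
    To make the core idea concrete, here is a minimal sketch (an illustration, not the authors' implementation): a transition guarded by a universal global condition is over-approximated by first deleting the processes that violate the guard and then firing the transition. All state names and the guard itself are assumptions for the example.

```python
# A minimal sketch of monotonic abstraction for a parameterized system.
# Configurations are multisets (here: sorted tuples) of local states.
# The concrete transition lets a waiting process enter the critical
# section only if *all* other processes are idle (a global guard).
# The monotonic over-approximation instead *deletes* every process
# violating the guard and then fires the transition.

def concrete_step(config):
    """Fire the globally guarded transition exactly (may be disabled)."""
    succs = set()
    for i, s in enumerate(config):
        others = config[:i] + config[i + 1:]
        if s == 'wait' and all(o == 'idle' for o in others):
            succs.add(tuple(sorted(others + ('crit',))))
    return succs

def abstract_step(config):
    """Monotonic abstraction: delete guard-violating processes, then fire."""
    succs = set()
    for i, s in enumerate(config):
        if s != 'wait':
            continue
        others = tuple(o for j, o in enumerate(config) if j != i)
        kept = tuple(o for o in others if o == 'idle')  # delete violators
        succs.add(tuple(sorted(kept + ('crit',))))
    return succs

cfg = ('idle', 'wait', 'wait')
print(concrete_step(cfg))  # empty: the guard is blocked by the other waiter
print(abstract_step(cfg))  # non-empty: the offending process was deleted
```

    Since deleting processes can only enable more behavior, reachability in the abstracted system over-approximates reachability in the concrete one, which is what makes the technique sound for safety checking.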

    Ball: An R package for detecting distribution difference and association in metric spaces

    The rapid development of modern technology has given rise to numerous complex datasets that do not satisfy the axioms of Euclidean geometry, while most statistical hypothesis tests are only available in Euclidean or Hilbert spaces. To properly analyze data with more complicated structures, efforts have been made to solve the fundamental testing problems in more general spaces. In this paper, we present Ball, a publicly available R package implementing Ball statistical test procedures for K-sample distribution comparison and tests of mutual independence in metric spaces, which extend the corresponding procedures for two-sample distribution comparison and tests of independence. Tailor-made algorithms as well as engineering techniques are employed in the Ball package to speed up computation as much as possible. Two real data analyses and several numerical studies have been performed, and the results certify the power of the Ball package in analyzing complex data, e.g., spherical data and symmetric positive-definite matrix data.
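
    The package itself is written in R; as a language-neutral illustration of the kind of procedure it implements, the sketch below sets up a two-sample permutation test in Python that works purely from a pairwise distance matrix and therefore applies in general metric spaces. The energy-distance statistic is a stand-in here; the Ball divergence used by the package is defined differently.

```python
# A minimal sketch of a two-sample permutation test that, like the Ball
# tests, needs only pairwise distances between observations, so it works
# in general metric spaces. The statistic is the energy distance, used
# as a simple stand-in for the Ball divergence (see the paper and the
# R package for the real procedure).
import numpy as np

def two_sample_stat(D, idx_x, idx_y):
    """Energy-distance statistic computed from a full distance matrix D."""
    dxy = D[np.ix_(idx_x, idx_y)].mean()
    dxx = D[np.ix_(idx_x, idx_x)].mean()
    dyy = D[np.ix_(idx_y, idx_y)].mean()
    return 2 * dxy - dxx - dyy

def permutation_test(D, n_x, n_perm=999, seed=0):
    """Permutation p-value for H0: the two samples share one distribution."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    labels = np.arange(n)
    obs = two_sample_stat(D, labels[:n_x], labels[n_x:])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        if two_sample_stat(D, perm[:n_x], perm[n_x:]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

# Example with spherical data: unit vectors compared via geodesic distance.
rng = np.random.default_rng(1)
x = rng.normal(size=(30, 3)); x /= np.linalg.norm(x, axis=1, keepdims=True)
y = rng.normal(loc=0.5, size=(30, 3)); y /= np.linalg.norm(y, axis=1, keepdims=True)
pts = np.vstack([x, y])
D = np.arccos(np.clip(pts @ pts.T, -1.0, 1.0))  # arc length on the sphere
print(permutation_test(D, n_x=30))
```

    Note that only the distance matrix D enters the test, which is exactly why such procedures extend beyond Euclidean or Hilbert spaces.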

    A Confidence Habitats Methodology in MR Quantitative Diffusion for the Classification of Neuroblastic Tumors

    There is growing interest in applying quantitative diffusion techniques to magnetic resonance imaging for cancer diagnosis and treatment. These measurements are used as a surrogate marker of tumor cellularity and aggressiveness, although there may be factors that introduce some bias to these approaches. Thus, we explored a novel methodology based on confidence habitats and voxel uncertainty to improve the power of the apparent diffusion coefficient to discriminate between benign and malignant neuroblastic tumor profiles in children. We were able to show this offered improved sensitivity and negative predictive value relative to standard voxel-based methodologies.
    Background/Aim: In recent years, the apparent diffusion coefficient (ADC) has been used in many oncology applications as a surrogate marker of tumor cellularity and aggressiveness, although several factors may introduce bias when calculating this coefficient. The goal of this study was to develop a novel methodology (Fit-Cluster-Fit) based on confidence habitats that could be applied to quantitative diffusion-weighted magnetic resonance images (DWIs) to enhance the power of ADC values to discriminate between benign and malignant neuroblastic tumor profiles in children.
    Methods: Histogram analysis and clustering-based algorithms were applied to DWIs from 33 patients to perform tumor voxel discrimination into two classes. Voxel uncertainties were quantified and incorporated to obtain a more reproducible and meaningful estimate of ADC values within a tumor habitat. Computational experiments were performed by smearing the ADC values in order to obtain confidence maps that help identify and remove noise from low-quality voxels within high-signal clustered regions. The proposed Fit-Cluster-Fit methodology was compared with two other methods: a conventional voxel-based approach and a cluster-based strategy.
    Results: The cluster-based and Fit-Cluster-Fit models successfully differentiated benign and malignant neuroblastic tumor profiles when using values from the lower ADC habitat. In particular, the best sensitivity (91%) and specificity (89%) of all the combinations and methods explored were achieved by removing uncertainties at a 70% confidence threshold, improving standard voxel-based sensitivity and negative predictive values by 4% and 10%, respectively.
    Conclusions: The Fit-Cluster-Fit method improves the performance of imaging biomarkers in classifying pediatric solid tumor cancers, and it can probably be adapted to dynamic signal evaluation for any tumor.
    This study was funded by PRIMAGE (PRedictive In-silico Multiscale Analytics to support cancer personalized diaGnosis and prognosis, empowered by imaging biomarkers), a Horizon 2020 | RIA project (Topic SC1-DTH-07-2018), grant agreement no. 826494.
    Cerdà Alberich, L.; Sangüesa Nebot, C.; Alberich-Bayarri, A.; Carot Sierra, JM.; Martinez De Las Heras, B.; Veiga Canuto, D.; Cañete, A.... (2020). A Confidence Habitats Methodology in MR Quantitative Diffusion for the Classification of Neuroblastic Tumors. Cancers. 12(12):1-14. https://doi.org/10.3390/cancers12123858
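
    A schematic of the Fit-Cluster-Fit flow, sketched under stated assumptions: cluster voxel ADC values into two habitats, attach a per-voxel confidence by re-clustering noise-perturbed ("smeared") copies of the data, and keep only voxels whose assignment is stable above a confidence threshold before re-estimating the habitat ADC. The k-means choice, the noise model, and the synthetic data are illustrative stand-ins, not the paper's exact pipeline.

```python
# A schematic sketch of the Fit-Cluster-Fit idea: cluster voxel ADC
# values into two habitats, estimate per-voxel confidence by
# re-clustering noise-perturbed ("smeared") copies of the data, and
# keep only voxels whose assignment is stable above a confidence
# threshold before re-estimating the habitat ADC. Noise model,
# clustering method, and threshold are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def fit_cluster_fit(adc, n_rep=100, noise_sd=0.05, conf_thresh=0.70, seed=0):
    rng = np.random.default_rng(seed)
    base = KMeans(n_clusters=2, n_init=10, random_state=seed).fit(adc.reshape(-1, 1))
    # Relabel so that label 0 is always the lower-ADC habitat.
    order = np.argsort(base.cluster_centers_.ravel())
    labels = order.argsort()[base.labels_]
    # Smear the ADC values and count how often each voxel keeps its label.
    agree = np.zeros(adc.size)
    for _ in range(n_rep):
        noisy = adc + rng.normal(scale=noise_sd * adc.std(), size=adc.shape)
        km = KMeans(n_clusters=2, n_init=10, random_state=seed).fit(noisy.reshape(-1, 1))
        o = np.argsort(km.cluster_centers_.ravel())
        agree += (o.argsort()[km.labels_] == labels)
    confidence = agree / n_rep
    keep = confidence >= conf_thresh          # drop low-confidence voxels
    low_habitat = adc[keep & (labels == 0)]
    return low_habitat.mean(), confidence

# Synthetic two-habitat ADC distribution (arbitrary units).
adc = np.concatenate([np.random.default_rng(1).normal(0.7, 0.1, 200),
                      np.random.default_rng(2).normal(1.4, 0.2, 200)])
mean_low, conf = fit_cluster_fit(adc)
print(f"lower-habitat mean ADC: {mean_low:.3f}")
```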

    Parametric timed model checking for guaranteeing timed opacity

    Information leakage can have dramatic consequences for systems security. Among harmful information leaks, timing information leakage is the ability of an attacker to deduce internal information from the system execution time. We address the following problem: given a timed system, synthesize the execution times for which one cannot deduce whether the system performed some secret behavior. We solve this problem in the setting of timed automata (TAs). We first provide a general solution, and then extend the problem to parametric TAs, synthesizing internal timings that make the TA secure. We study decidability, devise algorithms, and show that our method can also apply to program analysis.
    Comment: This is the author (and extended) version of the manuscript of the same name published in the proceedings of ATVA 2019. This work is partially supported by the ANR national research program PACS (ANR-14-CE28-0002), the ANR-NRF research program (ProMiS) and by ERATO HASUO Metamathematics for Systems Design Project (No. JPMJER1603), JS
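
    The property at stake can be phrased simply: an execution time is opaque iff it can be produced both by a run that performs the secret behavior and by a run that does not, so observing the duration alone reveals nothing. The toy sketch below merely enumerates durations of a small finite system to illustrate this definition; the paper itself synthesizes such times symbolically over (parametric) timed automata.

```python
# A toy sketch of the timed-opacity property: an execution time is
# "opaque" iff it is achievable both by a run visiting the secret and
# by one that does not. We enumerate durations of a small finite
# system; the step-duration sets are illustrative assumptions.
from itertools import product

# Each path is (visits_secret, list of possible durations of each step).
paths = [
    (True,  [{1, 2}, {3}]),      # secret path: steps taking 1-2, then 3
    (False, [{2}, {2, 3}]),      # public path: steps taking 2, then 2-3
]

secret_times, public_times = set(), set()
for visits_secret, steps in paths:
    for choice in product(*steps):
        (secret_times if visits_secret else public_times).add(sum(choice))

opaque = secret_times & public_times
print("secret-run durations:", sorted(secret_times))   # [4, 5]
print("public-run durations:", sorted(public_times))   # [4, 5]
print("opaque execution times:", sorted(opaque))       # [4, 5]
```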

    Experiments with a Convex Polyhedral Analysis Tool for Logic Programs

    Convex polyhedral abstractions of logic programs have been found very useful in deriving numeric relationships between program arguments in order to prove program properties, and in other areas such as termination and complexity analysis. We present a tool for constructing polyhedral analyses of (constraint) logic programs. The aim of the tool is to make available, with a convenient interface, state-of-the-art techniques for polyhedral analysis such as delayed widening, narrowing, "widening up-to", and enhanced automatic selection of widening points. The tool is accessible on the web, permits user programs to be uploaded and analysed, and is integrated with related program transformations such as size abstractions and query-answer transformation. We then report some experiments using the tool, showing how it can be conveniently used to analyse transition systems arising from models of embedded systems, and an emulator for a PIC microcontroller which is used, for example, in wearable computing systems. We discuss issues including scalability, tradeoffs of precision and computation time, and other program transformations that can enhance the results of analysis.
    Comment: Paper presented at the 17th Workshop on Logic-based Methods in Programming Environments (WLPE 2007)
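
    To illustrate the delayed-widening technique named above on a domain simpler than polyhedra, here is a one-dimensional interval sketch (the analysed loop, the delay parameter, and the final narrowing pass are illustrative assumptions): widening forces the fixpoint iteration to terminate, and narrowing recovers some of the lost precision.

```python
# A sketch of fixpoint iteration with delayed widening, the core
# mechanism behind polyhedral analysis, shown on one-dimensional
# intervals instead of polyhedra. We analyse the loop
# "x := 0; while x < 100: x := x + 1" and infer bounds on x.
INF = float('inf')

def join(a, b):                      # least upper bound of two intervals
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):                     # jump unstable bounds to infinity
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

def step(x):                         # transfer function of one iteration
    lo, hi = x
    hi = min(hi, 99)                 # guard: x < 100
    return (lo + 1, hi + 1)          # body:  x := x + 1

x = (0, 0)                           # x := 0 before the loop
delay = 3                            # join this many times before widening
for i in range(100):
    nxt = join(x, step(x))
    nxt = nxt if i < delay else widen(x, nxt)
    if nxt == x:
        break
    x = nxt
# Narrow once: re-apply the exact transfer function to refine infinity.
x = join((0, 0), step(x))
print("invariant for x at loop head:", x)   # (0, 100)
```

    Without widening the iteration would take 100 steps here and need not terminate at all on unbounded loops; the delay trades a few extra iterations for precision before the jump to infinity.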

    Submicron Systems Architecture Project: Semiannual Technical Report

    The Mosaic C is an experimental fine-grain multicomputer based on single-chip nodes. The Mosaic C chip includes 64KB of fast dynamic RAM, a processor, a packet interface, ROM for bootstrap and self-test, and a two-dimensional self-timed router. The chip architecture provides low-overhead and low-latency handling of message packets, and high memory and network bandwidth. Sixty-four Mosaic chips are packaged by tape-automated bonding (TAB) in an 8 x 8 array on circuit boards that can, in turn, be arrayed in two dimensions to build arbitrarily large machines. These 8 x 8 boards are now in prototype production under a subcontract with Hewlett-Packard. We are planning to construct a 16K-node Mosaic C system from 256 of these boards. The suite of Mosaic C hardware also includes host-interface boards and high-speed communication cables. The hardware developments and activities of the past eight months are described in section 2.1. The programming system that we are developing for the Mosaic C is based on the same message-passing, reactive-process computational model that we have used with earlier multicomputers, but the model is implemented for the Mosaic in a way that supports fine-grain concurrency. A process executes only in response to receiving a message, and may in execution send messages, create new processes, and modify its persistent variables before it either exits or becomes dormant in preparation for receiving another message. These computations are expressed in an object-oriented programming notation, a derivative of C++ called C+-. The computational model and the C+- programming notation are described in section 2.2. The Mosaic C runtime system, which is written in C+-, provides automatic process placement and highly distributed management of system resources. The Mosaic C runtime system is described in section 2.3.
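
    The reactive-process model lends itself to a compact sketch. The following is written in Python rather than C+- (whose notation is not shown here) and is only an illustration of the model described above: a process runs solely in response to a message, and its handler may send messages, spawn new processes, and update persistent variables before the process becomes dormant again.

```python
# A minimal sketch, in Python rather than C+-, of the reactive-process
# model: a process executes only when it receives a message; its handler
# may send messages, spawn processes, and update persistent variables.
from collections import deque

class Process:
    def __init__(self, system, pid):
        self.system, self.pid = system, pid
    def receive(self, msg):          # override: the only entry point
        raise NotImplementedError

class Counter(Process):
    count = 0                        # persistent variable
    def receive(self, msg):
        self.count += msg["n"]
        if msg.get("reply_to") is not None:
            self.system.send(msg["reply_to"], {"total": self.count})

class Printer(Process):
    def receive(self, msg):
        print(f"[{self.pid}] total = {msg['total']}")

class System:
    def __init__(self):
        self.procs, self.queue, self.next_pid = {}, deque(), 0
    def spawn(self, cls):
        pid = self.next_pid; self.next_pid += 1
        self.procs[pid] = cls(self, pid)
        return pid
    def send(self, pid, msg):
        self.queue.append((pid, msg))
    def run(self):                   # deliver messages until quiescent
        while self.queue:
            pid, msg = self.queue.popleft()
            self.procs[pid].receive(msg)

sys_ = System()
printer = sys_.spawn(Printer)
counter = sys_.spawn(Counter)
sys_.send(counter, {"n": 3})
sys_.send(counter, {"n": 4, "reply_to": printer})
sys_.run()                           # prints "[0] total = 7"
```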

    Learning and comparing functional connectomes across subjects

    Functional connectomes capture brain interactions via synchronized fluctuations in the functional magnetic resonance imaging signal. If measured during rest, they map the intrinsic functional architecture of the brain. With task-driven experiments, they represent integration mechanisms between specialized brain areas. Analyzing their variability across subjects and conditions can reveal markers of brain pathologies and mechanisms underlying cognition. Methods of estimating functional connectomes from the imaging signal have undergone rapid development, and the literature is full of diverse strategies for comparing them. This review aims to clarify the links between functional-connectivity methods and to lay out the different steps required to perform a group study of functional connectomes.
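
    As one concrete instance of the pipeline this review surveys (one choice among the many strategies it covers), the sketch below estimates each subject's connectome as a correlation matrix over regional time series, maps it to a vector space via the Fisher z-transform, and compares subjects there. The region count, subject count, and distance measure are illustrative.

```python
# One concrete instance of a connectome group study: estimate each
# subject's connectome as a correlation matrix over regional fMRI time
# series, vectorize it via the Fisher z-transform, and compare subjects
# in that space. Data here are synthetic; sizes are illustrative.
import numpy as np

def connectome(ts):
    """Correlation matrix from a (timepoints x regions) signal array."""
    return np.corrcoef(ts, rowvar=False)

def fisher_vec(C):
    """Fisher z-transform of the upper triangle, for group statistics."""
    iu = np.triu_indices_from(C, k=1)
    return np.arctanh(np.clip(C[iu], -0.999, 0.999))

rng = np.random.default_rng(0)
n_subjects, n_time, n_regions = 5, 120, 10
vecs = np.array([fisher_vec(connectome(rng.normal(size=(n_time, n_regions))))
                 for _ in range(n_subjects)])

group_mean = vecs.mean(axis=0)                      # group-level connectome
dists = np.linalg.norm(vecs - group_mean, axis=1)   # per-subject deviation
print("subject distances to group mean:", np.round(dists, 2))
```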