Being correct is not enough: efficient verification using robust linear temporal logic
While most approaches in formal methods address system correctness, ensuring
robustness has remained a challenge. In this paper we present and study the
logic rLTL which provides a means to formally reason about both correctness and
robustness in system design. Furthermore, we identify a large fragment of rLTL
for which the verification problem can be efficiently solved, i.e.,
verification can be done by using an automaton, recognizing the behaviors
described by the rLTL formula φ, of size at most O(3^{|φ|}), where |φ| is the
length of φ. This result improves upon the previously known bound of O(5^{|φ|})
for rLTL verification and is closer to the LTL bound of O(2^{|φ|}). The
usefulness of
this fragment is demonstrated by a number of case studies showing its practical
significance in terms of expressiveness, the ability to describe robustness,
and the fine-grained information that rLTL brings to the process of system
verification. Moreover, these advantages come at a low computational overhead
with respect to LTL verification.

Comment: arXiv admin note: text overlap with arXiv:1510.08970. v2 notes: proof of the complexity of translating rLTL formulae to LTL formulae via the rewriting approach; new case study on the scalability of rLTL formulae in the proposed fragment. Accepted to appear in ACM Transactions on Computational Logic
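The gap between the three automaton-size bounds above can be made concrete with a small sketch (illustrative only, not part of the paper's artifact): for a formula of length n, it tabulates the classical LTL bound 2^n, the fragment's bound 3^n, and the general rLTL bound 5^n.

```python
# Illustrative comparison of the asymptotic automaton-size bounds:
# 2^n for LTL, 3^n for the rLTL fragment, 5^n for general rLTL.

def automaton_size_bounds(n: int) -> dict:
    """Return the dominant terms of the automaton-size bounds for formula length n."""
    return {
        "LTL (2^n)": 2 ** n,
        "rLTL fragment (3^n)": 3 ** n,
        "rLTL general (5^n)": 5 ** n,
    }

for n in (5, 10, 15):
    b = automaton_size_bounds(n)
    # How much smaller the fragment's automaton is than the general bound:
    ratio = b["rLTL general (5^n)"] / b["rLTL fragment (3^n)"]
    print(n, b, f"general/fragment ratio = {ratio:.1f}")
```

Even at n = 10 the general bound exceeds the fragment's bound by a factor of (5/3)^10 ≈ 165, which is what makes the fragment practically relevant.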
Predicting survival in malignant pleural effusion: development and validation of the LENT prognostic score
BACKGROUND: Malignant pleural effusion (MPE) causes debilitating breathlessness and predicting survival is challenging. This study aimed to obtain contemporary data on survival by underlying tumour type in patients with MPE, identify prognostic indicators of overall survival and develop and validate a prognostic scoring system. METHODS: Three large international cohorts of patients with MPE were used to calculate survival by cell type (univariable Cox model). The prognostic value of 14 predefined variables was evaluated in the most complete data set (multivariable Cox model). A clinical prognostic scoring system was then developed and validated. RESULTS: Based on the results of the international data and the multivariable survival analysis, the LENT prognostic score (pleural fluid lactate dehydrogenase, Eastern Cooperative Oncology Group (ECOG) performance score (PS), neutrophil-to-lymphocyte ratio and tumour type) was developed and subsequently validated using an independent data set. Risk stratifying patients into low-risk, moderate-risk and high-risk groups gave median (IQR) survivals of 319 days (228–549; n=43), 130 days (47–467; n=129) and 44 days (22–77; n=31), respectively. Only 65% (20/31) of patients with a high-risk LENT score survived 1 month from diagnosis and just 3% (1/31) survived 6 months. Analysis of the area under the receiver operating curve revealed the LENT score to be superior at predicting survival compared with ECOG PS at 1 month (0.77 vs 0.66, p<0.01), 3 months (0.84 vs 0.75, p<0.01) and 6 months (0.85 vs 0.76, p<0.01). CONCLUSIONS: The LENT scoring system is the first validated prognostic score in MPE, which predicts survival with significantly better accuracy than ECOG PS alone. This may aid clinical decision making in this diverse patient population
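The LENT score combines the four components named in the abstract into a single risk band. A minimal sketch of such a calculator follows; the component variables come from the abstract, but the point values, cut-offs, and band boundaries below are placeholders for illustration only, not the validated thresholds from the published score.

```python
# Illustrative LENT-style risk calculator. Point values and cut-offs are
# hypothetical; consult the published, validated score before clinical use.

def lent_style_score(ldh: float, ecog_ps: int, nlr: float, tumour_points: int) -> int:
    """Sum illustrative points for each LENT component."""
    score = 0
    score += 1 if ldh >= 1500 else 0   # placeholder pleural fluid LDH cut-off (IU/L)
    score += min(ecog_ps, 3)           # ECOG performance score, capped at 3 points
    score += 1 if nlr >= 9 else 0      # placeholder neutrophil-to-lymphocyte cut-off
    score += tumour_points             # 0-2 points depending on tumour type
    return score

def risk_band(score: int) -> str:
    """Map a total score to an illustrative risk band."""
    if score <= 1:
        return "low"
    if score <= 4:
        return "moderate"
    return "high"
```

For example, a patient with LDH 1800 IU/L, ECOG PS 2, NLR 12 and a high-risk tumour type (2 points) scores 6 under these placeholder rules and falls in the high-risk band.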
Use of interrupter technique in assessment of bronchial responsiveness in normal subjects
BACKGROUND: A number of subjects, especially the very young and the elderly, are unable to cooperate and perform the forced expiratory manoeuvres used in the evaluation of bronchial hyperresponsiveness (BHR). The objective of our study was to investigate the use of the interrupter technique as a method to measure the response to provocation and to compare it with the conventional PD(20)FEV(1). METHODS: We studied 170 normal subjects, 100 male and 70 female (mean ± SD age, 38 ± 8.5 and 35 ± 7.5 years, respectively), non-smokers from healthy families. These subjects had no respiratory symptoms, rhinitis or atopic history. A dosimetric cumulative inhalation of methacholine was used and the response was measured by the dose which increases baseline end-interruption resistance by 100% (PD(100)Rint,EI) as well as by the percent dose response ratio (DRR). RESULTS: BHR at a cut-off level of 0.8 mg methacholine was exhibited by 31 (18%) of the subjects (specificity 81.2%), 21 male and 10 female, while 3% showed a response in the asthmatic range. The method was reproducible and showed good correlation with PD(20)FEV(1) (r = 0.76, p < 0.005), with relatively narrow limits of agreement at -1.39 μmol and 1.27 μmol methacholine, respectively, but the interrupter methodology proved more sensitive than FEV(1) in terms of reactivity (DRR). CONCLUSIONS: Interrupter methodology is clinically useful and may be used to evaluate bronchial responsiveness in normal subjects and in situations when forced expirations cannot be performed
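A provocative dose such as PD(100)Rint,EI is the cumulative dose at which the percentage rise in resistance crosses a target (here 100%). A common convention is to interpolate between the last two doses on a log-dose scale; the sketch below uses that convention as an assumption, since the abstract does not state the interpolation method.

```python
import math

# Illustrative provocative-dose calculation by log-linear interpolation.
# The log-dose interpolation is an assumed convention, not stated in the study.

def provocative_dose(d1: float, d2: float, r1: float, r2: float,
                     target: float = 100.0) -> float:
    """Interpolate the cumulative dose at which the % rise in resistance
    reaches `target`, given response r1 at dose d1 and r2 at dose d2
    (requires r1 < target <= r2), on a log-dose scale."""
    frac = (target - r1) / (r2 - r1)
    return math.exp(math.log(d1) + frac * (math.log(d2) - math.log(d1)))
```

For instance, if resistance rose 50% at a cumulative dose of 0.4 and 150% at 0.8, the interpolated PD(100) lies at 0.4·√2 ≈ 0.57 on the log-dose scale.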
Etiologic Diagnosis of Lower Respiratory Tract Bacterial Infections Using Sputum Samples and Quantitative Loop-Mediated Isothermal Amplification
Etiologic diagnoses of lower respiratory tract infections (LRTI) have relied primarily on bacterial cultures that often fail to return useful results in time. Although DNA-based assays are more sensitive than bacterial cultures in detecting pathogens, the molecular results are often inconsistent and challenged by doubts about false positives, such as those due to system- and environment-derived contaminations. Here we report a nationwide cohort study on 2986 suspected LRTI patients across P. R. China. We compared the performance of a DNA-based assay, qLAMP (quantitative Loop-mediated isothermal AMPlification), with that of standard bacterial cultures in detecting a panel of eight common respiratory bacterial pathogens from sputum samples. Our qLAMP assay detected the panel of pathogens in 1047 (69.28%) of the 1533 qualified patients. We found that the bacterial titer quantified by qLAMP is a predictor of the probability that the bacterium in the sample can be detected in the culture assay. The relatedness of the two assays fits a logistic regression curve. We used a piecewise linear function to define breakpoints where a latent pathogen abruptly changes its competitive relationship with others in the panel. These breakpoints, where pathogens start to propagate abnormally, are used as cutoffs to eliminate the influence of contamination from normal flora. With the help of the cutoffs derived from this statistical analysis, we were able to identify causative pathogens in 750 (48.92%) of the qualified patients. In conclusion, qLAMP is a reliable method for quantifying bacterial titer. Although latent bacteria always contaminate sputum samples, causative pathogens can be identified based on cutoffs derived from statistical analysis of the competitive relationship
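The statistical idea above (culture positivity as a logistic function of qLAMP titer, with a per-pathogen cutoff separating likely causative pathogens from background flora) can be sketched as follows; the parameter values and cutoffs are invented for illustration, not fitted to the study's data.

```python
import math

# Sketch of the qLAMP/culture relationship described in the abstract.
# Coefficients b0, b1 and the cutoff are hypothetical placeholders.

def p_culture_positive(log10_titer: float, b0: float = -6.0, b1: float = 1.2) -> float:
    """Logistic model: P(culture+) = 1 / (1 + exp(-(b0 + b1 * log10_titer)))."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * log10_titer)))

def is_causative(log10_titer: float, cutoff_log10: float) -> bool:
    """Flag a pathogen as likely causative when its titer exceeds the
    breakpoint-derived cutoff; below it, treat it as latent flora."""
    return log10_titer >= cutoff_log10
```

With these placeholder coefficients, a log10 titer of 5.0 sits exactly at the 50% culture-positivity point, and whether it counts as causative depends on the pathogen-specific cutoff.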
A Study of Page-Based Memory Allocation Policies for the Argo Distributed Shared Memory System
Software distributed shared memory (DSM) systems have been one of the main areas of research in the high-performance computing community. One of the many implementations of such systems is Argo, a page-based, user-space DSM built on top of MPI. Researchers have dedicated considerable effort to making Argo easier to use and to alleviating some of its shortcomings that hurt performance and scaling. However, several issues remain to be addressed, one of them being the simplistic distribution of pages across the nodes of a cluster. Since Argo works at page granularity, the page-based memory allocation, or placement of pages, in a distributed system is of significant importance to performance, since it determines the extent of remote memory accesses. To ensure high performance, it is essential to employ memory allocation policies that allocate data in distributed memory modules intelligently, thus reducing latencies and increasing memory bandwidth. In this thesis, we incorporate several page placement policies into Argo and evaluate their impact on performance with a set of benchmarks ported to that programming model
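Two classic page placement policies of the kind such a study evaluates can be sketched in a few lines; the policy names and signatures below are illustrative, not Argo's API. Cyclic (round-robin) placement spreads consecutive pages across nodes, while block placement gives each node one contiguous chunk of the page range.

```python
# Illustrative page-to-home-node mappings for a page-based DSM.
# These are generic textbook policies, not Argo's actual implementation.

def cyclic_home_node(page: int, num_nodes: int) -> int:
    """Round-robin: page i lives on node i mod N, spreading hot pages out."""
    return page % num_nodes

def block_home_node(page: int, num_pages: int, num_nodes: int) -> int:
    """Block: split the page range into N contiguous chunks, one per node,
    preserving spatial locality of neighbouring pages."""
    pages_per_node = -(-num_pages // num_nodes)  # ceiling division
    return page // pages_per_node
```

The trade-off the thesis explores follows directly from these mappings: cyclic placement balances load across nodes but scatters contiguous data, while block placement keeps neighbouring pages local to one node at the risk of hotspots.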
A mithrilian approach to safety and robustness of autonomous cyber-physical systems
Every engineer dreams of and strives for the day that more and more aspects of daily life become smart, efficient, and automated. Smart grids, smart cities, mobility on demand and autonomous vehicles, even medical devices are a few domains with already significant progress towards a fully autonomous future. In such an increasingly autonomous world, guaranteeing safety and correctness of the design and implementation of Cyber-Physical Systems (CPSs) is constantly under the spotlight. There is no room for error when a system interacts with and affects human lives. Consequently, we want to design systems that are safe and correct even when interacting with unknown, changing environments. We want the algorithms designing those systems to be efficient upon execution and to provide formal guarantees about safety and correctness. In other words, they should act as a lightweight and impenetrable armor for the system. Such armors are found in the work of the father of modern fiction literature, J. R. R. Tolkien. He conceived in his work the metal mithril, and mithril armors are described as "light and yet harder than tempered steel". Inspired by this, the approaches presented in this dissertation are termed mithrilian, as they are characterized by computational efficiency and provide formal safety and robustness guarantees. More specifically, this dissertation discusses formal methods in control systems and is divided into two parts: 1. The first part concerns the synthesis of safety controllers via Robust Controlled Invariant Sets (RCISs). A safety-enforcing controller keeps the state of the system within a set of safe states notwithstanding the presence of uncertainties. Using RCISs to design safety controllers is natural, as any trajectory starting within these sets can always be forced to remain therein. On these grounds, we revisit the problem of computing RCISs for discrete-time linear systems.
Departing from previous approaches, we consider implicit, rather than explicit, representations for controlled invariant sets. Moreover, by considering such representations in the space of states and finite input sequences we obtain closed-form expressions for controlled invariant sets. An immediate advantage is the ability to handle high-dimensional systems, since the closed-form expression is computed in a single step rather than iteratively. We present thorough case studies illustrating that in safety-critical scenarios the implicit representation suffices in place of the explicit RCIS. The proposed method is complete in the absence of disturbances, and we provide a weak completeness result when disturbances are present.
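The defining property of an RCIS (every trajectory starting inside it can be forced to stay inside, for every admissible disturbance) can be checked directly in a toy scalar case. The system, sets, and function below are invented for illustration and are far simpler than the dissertation's implicit, high-dimensional constructions.

```python
# Toy RCIS check for the scalar system  x+ = a*x + u + w,
# with |u| <= u_max and |w| <= w_max. The candidate set is the
# interval C = [-c, c]. C is robust controlled invariant iff, from the
# worst-case state x = ±c, some admissible u keeps a*x + u + w in C
# for every admissible disturbance w.

def is_rcis_interval(c: float, a: float, u_max: float, w_max: float) -> bool:
    """Check whether [-c, c] is an RCIS for the scalar system above.

    At x = c we need |a*c + u| <= c - w_max for some |u| <= u_max,
    i.e. u_max >= |a|*c - (c - w_max); the set must also absorb the
    disturbance on its own, c >= w_max."""
    return u_max >= abs(a) * c - (c - w_max) and c >= w_max
```

For example, with a = 2, c = 1 and w_max = 0.4, a control bound of u_max = 1.5 suffices (1.4 is needed), while u_max = 1.0 does not: the unstable dynamics can then push trajectories out of the interval.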
2. In the second part, we switch gears and consider the problem of guaranteeing robustness in system design. Robustness is introduced in system verification by robust Linear Temporal Logic (rLTL). As CPSs inevitably become increasingly complex, the ability to completely guarantee correctness of their design and implementation via exhaustive testing fades. Most work in formal methods has focused on system correctness, i.e., on ensuring that systems are guaranteed to meet their design specifications. We argue that correctness is necessary, but not sufficient, for a good design when a reactive system interacts with an ever-changing uncontrolled environment. Thus, in addition to correctness, systems should also be designed to be robust, i.e., small deviations from the assumptions made at design time should lead to, at most, small violations of the design specifications. The contribution here lies in refining the current complexity upper bound of rLTL verification and developing a tool for rLTL verification. More specifically, we identify a large fragment of rLTL for which the verification problem can be efficiently solved, i.e., verification can be done by using an automaton, recognizing the behaviors described by the rLTL formula φ, of size at most O(3^{|φ|}), where |φ| is the length of φ. This result improves upon the previously known bound of O(5^{|φ|}) and is closer to the LTL bound of O(2^{|φ|}). The usefulness of this fragment is demonstrated by a number of case studies showing its practical significance in terms of expressiveness, the ability to describe robustness, and the fine-grained information that rLTL brings to the process of system verification. Moreover, these advantages come at a low computational overhead with respect to LTL verification. To perform rLTL verification, we developed Evrostos, the first tool for model-checking rLTL formulas in this fragment