368 research outputs found
A Unified Approach for the Integration of Distributed Heterogeneous Software Components
Proceedings of the 2001 Monterey Workshop (Sponsored by DARPA, ONR, ARO and AFOSR), pp. 109-119, Monterey, CA, 2001

Distributed systems are omnipresent these days, and creating efficient and robust software for such systems is a highly complex task. One possible approach to developing distributed software is based on the integration of heterogeneous software components that are scattered across many machines. In this paper, a comprehensive framework that allows seamless integration of distributed heterogeneous software components is proposed. This framework involves: (a) a metamodel for components and an associated hierarchical setup for indicating the contracts and constraints of the components, (b) automatic generation of glue and wrappers, based on a designer's specifications, for achieving interoperability, (c) a formal mechanism for precisely describing the metamodel, and (d) a formalization of the quality of service (QoS) offered by each component and by ensembles of components. A case study from the domain of distributed information filtering is described in the context of this framework.

This material is based upon work supported, in whole or in part, by the U.S. Office of Naval Research under award number N00014-01-1-0746, and by the U.S. Army Research Laboratory and the U.S. Army Research Office under contract/grant number 40473-MA.
Challenges and Directions in Formalizing the Semantics of Modeling Languages
Developing software from models is a growing practice, and many model-based tools (e.g., editors, interpreters, debuggers, and simulators) exist to support model-driven engineering. Although these tools facilitate the automation of software engineering tasks and activities, the tools themselves are typically engineered manually. Many of them, however, share a common semantic foundation centered on an underlying modeling language, which would make it possible to automate their development if the modeling language specification were formalized. While there has been much work on formalizing programming languages, with many successful tools constructed using such formalisms, there has been little work on formalizing modeling languages for the purpose of automation. This paper discusses possible semantics-based approaches to the formalization of modeling languages and describes how such formalisms may be used to automate the construction of modeling tools.
New results on rewrite-based satisfiability procedures
Program analysis and verification require decision procedures to reason about theories of data structures. Many problems can be reduced to the satisfiability of sets of ground literals in a theory T. If a sound and complete inference system for first-order logic is guaranteed to terminate on T-satisfiability problems, any theorem-proving strategy with that system and a fair search plan is a T-satisfiability procedure. We prove termination of a rewrite-based first-order engine on the theories of records, integer offsets, integer offsets modulo, and lists. We give a modularity theorem stating sufficient conditions for termination on a combination of theories, given termination on each; the above theories, as well as others, satisfy these conditions. We introduce several sets of benchmarks on these theories and their combinations, including both parametric synthetic benchmarks to test scalability and real-world problems to test performance on huge sets of literals. We compare the rewrite-based theorem prover E with the validity checkers CVC and CVC Lite. Contrary to the folklore that a general-purpose prover cannot compete with reasoners with built-in theories, the experiments are overall favorable to the theorem prover, showing that the rewriting approach is not only elegant and conceptually simple but also has important practical implications.

Comment: To appear in ACM Transactions on Computational Logic; 49 pages.
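To make the notion of a T-satisfiability problem concrete, the sketch below decides satisfiability for the simplest such theory: sets of ground equality and disequality literals over constants, using union-find. This is an illustrative toy only, not the paper's rewrite-based procedure, and it omits the richer theories (records, integer offsets, lists) the paper actually treats; all function names are ours.

```python
def find(parent, x):
    """Return the representative of x's equivalence class (with path halving)."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def satisfiable(equalities, disequalities):
    """Decide satisfiability of ground literals {a = b, ...} U {c != d, ...}
    over constants: merge classes for each equality, then check that no
    disequality relates two constants forced into the same class."""
    parent = {}
    for a, b in equalities:
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        parent[find(parent, a)] = find(parent, b)
    for a, b in disequalities:
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        if find(parent, a) == find(parent, b):
            return False  # a and b are provably equal, contradicting a != b
    return True

# {a = b, b = c, a != c} is unsatisfiable; dropping b = c makes it satisfiable.
```

A decision procedure for a full theory with function symbols would additionally need congruence closure; the point here is only to show the shape of the input (a set of ground literals) and of the yes/no answer.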
A Novel Adaptation of a Parent-Child Observational Assessment Tool for Appraisals and Coping in Children Exposed to Acute Trauma
Background: Millions of children worldwide are exposed to acute potentially traumatic events (PTEs) annually. Many children and their families experience significant emotional distress and/or functional impairment following PTEs. While current research has begun to highlight a role for early appraisals and coping in promoting or preventing full recovery from PTEs, the exact nature of the relationships among appraisals, coping, and traumatic stress reactions as well as how appraisals and coping behaviors are influenced by the child's environment (e.g., parents) remains unclear; assessment tools that reach beyond self-report are needed to improve this understanding.
Objective: The objective of the current study is to describe the newly created Trauma Ambiguous Situations Tool (TAST; i.e., an observational child–parent interview and discussion task that allows assessment of appraisals, coping, and parent–child processes) and to report on initial feasibility and validation of TAST implemented with child–parent dyads in which children were exposed to a PTE.
Method: As part of a larger study on the role of biopsychosocial factors in posttraumatic stress reactions, children (aged 8–13) and parents (n = 25 child–parent dyads) completed the TAST during the child's hospitalization for injury.
Results: Children and parents engaged well with the TAST. The time to administer the TAST was feasible, even in a peri-trauma context. The TAST solicited a wide array of appraisals (threat and neutral) and coping solutions (proactive and avoidant). Forced-choice and open-ended appraisal assessments provided unique information. The parent–child discussion portion of the TAST allowed for direct observation of parent–child processes and demonstrated parental influence on children's appraisals and coping solutions.
Conclusions: The TAST is a promising new research tool, which may help to explicate how parents influence their child's developing appraisals and coping solutions following a PTE. More research should examine the relationships of appraisals, coping, and parent–child processes assessed by the TAST with traumatic stress outcomes.
Negative Predictive Value of Multiparametric Magnetic Resonance Imaging in the Detection of Clinically Significant Prostate Cancer in the Prostate Imaging Reporting and Data System Era: A Systematic Review and Meta-analysis
CONTEXT: Prebiopsy multiparametric magnetic resonance imaging (mpMRI) is increasingly used in prostate cancer diagnosis. The reported negative predictive value (NPV) of mpMRI is used by some clinicians to aid in decision making about whether or not to proceed to biopsy. OBJECTIVE: We aim to perform a contemporary systematic review that reflects the latest literature on optimal mpMRI techniques and scoring systems to update the NPV of mpMRI for clinically significant prostate cancer (csPCa). EVIDENCE ACQUISITION: We conducted a systematic literature search and included studies from 2016 to September 4, 2019, which assessed the NPV of mpMRI for csPCa, using biopsy or clinical follow-up as the reference standard. To ensure that studies included in this analysis reflect contemporary practice, we only included studies in which mpMRI findings were interpreted according to the Prostate Imaging Reporting and Data System (PIRADS) or similar Likert grading system. We define negative mpMRI as either (1) PIRADS/Likert 1-2 or (2) PIRADS/Likert 1-3; csPCa was defined as either (1) Gleason grade group ≥2 or (2) Gleason grade group ≥3. We calculated NPV separately for each combination of negative mpMRI and csPCa. EVIDENCE SYNTHESIS: A total of 42 studies with 7321 patients met our inclusion criteria and were included for analysis. Using definition (1) for negative mpMRI and csPCa, the pooled NPV for biopsy-naïve men was 90.8% (95% confidence interval [CI] 88.1-93.1%). When defining csPCa using definition (2), the NPV for csPCa was 97.1% (95% CI 94.9-98.7%). Calculation of the pooled NPV using definition (2) for negative mpMRI and definition (1) for csPCa yielded the following: 86.8% (95% CI 80.1-92.4%). Using definition (2) for both negative mpMRI and csPCa, the pooled NPV from two studies was 96.1% (95% CI 93.4-98.2%). CONCLUSIONS: Multiparametric MRI of the prostate is generally an accurate test for ruling out csPCa. 
However, we observed heterogeneity in the NPV estimates, and local institutional data should form the basis of decision making where available. PATIENT SUMMARY: These negative predictive values should assist decision making for clinicians considering not proceeding to biopsy in men with elevated age-specific prostate-specific antigen and multiparametric magnetic resonance imaging reported as negative (or equivocal) on Prostate Imaging Reporting and Data System/Likert scoring. Some 7-10% of men, depending on the setting, will have a diagnosis of clinically significant cancer missed if they do not proceed to biopsy. Given the institutional variation in results, it is of utmost importance to base decision making on local data where available.
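For readers unfamiliar with the statistic, negative predictive value is simply the fraction of negative test results that are truly negative, NPV = TN / (TN + FN). The sketch below illustrates this with made-up counts chosen only to echo the ~90.8% pooled figure quoted above; these are not data from the meta-analysis.

```python
def npv(true_negatives, false_negatives):
    """Negative predictive value: the fraction of negative mpMRI results
    in which clinically significant cancer is truly absent."""
    return true_negatives / (true_negatives + false_negatives)

# Illustrative counts only: if, among 1000 men with a negative mpMRI,
# 908 truly have no csPCa and 92 harbor a missed csPCa, then
value = npv(908, 92)  # 0.908, i.e. an NPV of 90.8%
```

The complement of the NPV (here ~9%) is the miss rate the patient summary refers to, which is why the abstract stresses checking the NPV achieved locally before forgoing biopsy.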
Re-examination of the Controversial Coexistence of Traumatic Brain Injury and Posttraumatic Stress Disorder: Misdiagnosis and Self-Report Measures
The coexistence of traumatic brain injury (TBI) and posttraumatic stress disorder (PTSD) remains a controversial issue in the literature. To address this controversy, we focused primarily on the civilian-related literature of TBI and PTSD. Some investigators have argued that individuals who had been rendered unconscious or suffered amnesia due to a TBI are unable to develop PTSD because they would be unable to consciously experience the symptoms of fear, helplessness, and horror associated with the development of PTSD. Other investigators have reported that individuals who sustain TBI, regardless of its severity, can develop PTSD even in the context of prolonged unconsciousness. A careful review of the methodologies employed in these studies reveals that investigators who relied on clinical interviews of TBI patients to diagnose PTSD found little or no evidence of PTSD. In contrast, investigators who relied on PTSD questionnaires to diagnose PTSD found considerable evidence of PTSD. Further analysis revealed that many of the TBI patients who were initially diagnosed with PTSD according to self-report questionnaires did not meet the diagnostic criteria for PTSD upon completion of a clinical interview. In particular, patients with severe TBI were often misdiagnosed with PTSD. A number of investigators found that many of the severe TBI patients failed to follow the questionnaire instructions and erroneously endorsed PTSD symptoms because of their cognitive difficulties. Because PTSD questionnaires are not designed to discriminate between PTSD and TBI symptoms or to determine whether a patient's responses are accurate or exaggerated, studies that rely on self-report questionnaires to evaluate PTSD in TBI patients are at risk of misdiagnosing PTSD. Further research should evaluate the degree to which misdiagnosis of PTSD occurs in individuals who have sustained mild TBI.
Phylogeography of Japanese encephalitis virus: genotype is associated with climate
The circulation of vector-borne zoonotic viruses is largely determined by the overlap in the geographical distributions of virus-competent vectors and reservoir hosts. What is less clear are the factors influencing the distribution of virus-specific lineages. Japanese encephalitis virus (JEV) is the most important etiologic agent of epidemic encephalitis worldwide, and is primarily maintained between vertebrate reservoir hosts (avian and swine) and culicine mosquitoes. There are five genotypes of JEV: GI-V. In recent years, GI has displaced GIII as the dominant JEV genotype, and GV has re-emerged after almost 60 years of undetected virus circulation. JEV is found throughout most of Asia, extending from maritime Siberia in the north to Australia in the south, and as far as Pakistan to the west and Saipan to the east. Transmission of JEV in temperate zones is epidemic, with the majority of cases occurring in summer months, while transmission in tropical zones is endemic and occurs year-round at lower rates. To test the hypothesis that viruses circulating in these two geographical zones are genetically distinct, we applied Bayesian phylogeographic analysis, categorical data analysis, and phylogeny-trait association tests to the largest JEV dataset compiled to date, representing the envelope (E) gene of 487 isolates collected from 12 countries over 75 years. We demonstrated that GIII and the recently emerged GI-b are temperate genotypes likely maintained year-round in northern latitudes, while GI-a and GII are tropical genotypes likely maintained primarily through mosquito-avian and mosquito-swine transmission cycles. This study represents a new paradigm directly linking viral molecular evolution and climate.