    An abstract machine for the execution of graph grammars

    An abstract machine for graph rewriting is the central part of the middle layer in the implementation of a grammar-based graph rewriting system. It specifies the interface between a compiler for graph grammars and a system performing the actual graph transformations. The introduction of a middle layer allows the analysis of a given graph grammar to be used to optimize its execution; the cost of expensive analyses is thus shifted from run time to compile time. Each implementation of the abstract machine can optimize the utilization of the available hardware. We give the specification of the state and the instruction set of the abstract machine. For an example grammar we show how compile-time analysis can reduce execution time, and we present code generation rules to implement a grammar on the abstract machine. In comparison to the abstract machines well known from the implementation of functional languages, our machine can execute rewriting specified by graph grammars, which is far more general than graph reduction. The abstract machine for graph rewriting is part of a project addressing the efficient implementation of the execution of graph grammars.
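
    As a rough illustration of the architecture described above, a minimal sketch follows: a machine state holding the host graph plus a small register file, and a few instructions (add, delete, match) that a graph-grammar compiler could target. All names and the instruction set are hypothetical; the paper's actual specification is not reproduced here.

    ```python
    # Hypothetical sketch of an abstract machine for graph rewriting, loosely
    # following the layering in the abstract (compiler front end -> abstract
    # machine -> concrete graph transformations). Instructions are illustrative.

    class GraphMachine:
        def __init__(self):
            self.nodes = {}          # node id -> label
            self.edges = set()       # (src, label, dst) triples
            self.next_id = 0
            self.registers = {}      # match results bound by compiled code

        def add_node(self, reg, label):
            """Create a labelled node and bind its id to a register."""
            self.nodes[self.next_id] = label
            self.registers[reg] = self.next_id
            self.next_id += 1

        def delete_node(self, reg):
            """Remove the node bound to a register, with incident edges."""
            n = self.registers[reg]
            del self.nodes[n]
            self.edges = {e for e in self.edges if n not in (e[0], e[2])}

        def add_edge(self, src_reg, label, dst_reg):
            self.edges.add((self.registers[src_reg], label,
                            self.registers[dst_reg]))

        def match_node(self, reg, label):
            """Naive match: bind the first node carrying the given label
            (the run-time search that compile-time analysis would try to
            narrow down or avoid)."""
            for n, l in self.nodes.items():
                if l == label:
                    self.registers[reg] = n
                    return True
            return False

    # A compiled rule "replace an A-node by a B-node" could then be emitted as:
    m = GraphMachine()
    m.add_node("r0", "A")
    if m.match_node("r1", "A"):
        m.delete_node("r1")
        m.add_node("r2", "B")
    ```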

    Framework and baseline examination of the German National Cohort (NAKO)

    The German National Cohort (NAKO) is a multidisciplinary, population-based prospective cohort study that aims to investigate the causes of widespread diseases, identify risk factors and improve early detection and prevention of disease. Specifically, NAKO is designed to identify novel risk and protective factors, and to better characterize established ones, for the development of cardiovascular diseases, cancer, diabetes, neurodegenerative and psychiatric diseases, musculoskeletal diseases, and respiratory and infectious diseases in a random sample of the general population. Between 2014 and 2019, a total of 205,415 men and women aged 19–74 years were recruited and examined in 18 study centres in Germany. The baseline assessment included a face-to-face interview, self-administered questionnaires and a wide range of biomedical examinations. Biomaterials were collected from all participants, including serum, EDTA plasma, buffy coats, RNA and erythrocytes, urine, saliva, nasal swabs and stool. In 56,971 participants, an intensified examination programme was implemented. Whole-body 3T magnetic resonance imaging was performed in 30,861 participants on dedicated scanners. NAKO collects follow-up information on incident diseases through a combination of active follow-up, using self-report via written questionnaires at 2–3-year intervals, and passive follow-up via record linkage. All study participants are invited for re-examinations at the study centres at 4–5-year intervals. Thereby, longitudinal information on changes in risk factor profiles and in vascular, cardiac, metabolic, neurocognitive, pulmonary and sensory function is collected. NAKO is a major resource for population-based epidemiology to identify new and tailored strategies for early detection, prediction, prevention and treatment of major diseases over the next 30 years. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s10654-022-00890-5.

    Progression of conventional cardiovascular risk factors and vascular disease risk in individuals: insights from the PROG-IMT consortium

    Aims: Averaged measurements of carotid intima-media thickness (cIMT), but not its progression based on multiple assessments, are predictive of cardiovascular disease (CVD) events in individuals. Whether this is true for conventional risk factors is unclear. Methods and results: An individual participant meta-analysis was used to associate the annualised progression of systolic blood pressure, total cholesterol, low-density lipoprotein cholesterol and high-density lipoprotein cholesterol with future cardiovascular disease risk in 13 prospective cohort studies of the PROG-IMT collaboration (n = 34,072). Follow-up data included information on a combined cardiovascular disease endpoint of myocardial infarction, stroke, or vascular death. In secondary analyses, annualised progression was replaced with averaged measurements. Log hazard ratios per standard deviation difference were pooled across studies by a random effects meta-analysis. In the primary analysis, the annualised progression of total cholesterol was marginally related to a higher cardiovascular disease risk (hazard ratio (HR) 1.04, 95% confidence interval (CI) 1.00 to 1.07). The annualised progression of systolic blood pressure, low-density lipoprotein cholesterol and high-density lipoprotein cholesterol was not associated with future cardiovascular disease risk. In the secondary analysis, average systolic blood pressure (HR 1.20, 95% CI 1.11 to 1.29) and low-density lipoprotein cholesterol (HR 1.09, 95% CI 1.02 to 1.16) were related to a greater risk, while high-density lipoprotein cholesterol (HR 0.92, 95% CI 0.88 to 0.97) was related to a lower risk of future cardiovascular disease events. Conclusion: Averaged measurements of systolic blood pressure, low-density lipoprotein cholesterol and high-density lipoprotein cholesterol displayed significant linear relationships with the risk of future cardiovascular disease events. However, there was no clear association between the annualised progression of these conventional risk factors in individuals and the risk of future clinical endpoints.
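
    A minimal sketch of the primary analysis as described in the abstract, under stated assumptions: per-participant annualised progression is taken as the slope of an ordinary least-squares line through repeated measurements, standardised to SD units before entering a Cox model. The data, column names and the use of the lifelines package are illustrative, not the consortium's code.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter  # assumed available

    def annualised_progression(times_years, values):
        """Slope (units per year) of an OLS line through repeated measurements."""
        slope, _intercept = np.polyfit(times_years, values, deg=1)
        return slope

    # e.g. systolic BP measured at baseline and ~3.6 years later:
    annualised_progression([0.0, 3.6], [130.0, 138.0])  # -> ~2.2 mmHg/year

    # Hypothetical per-participant slopes, follow-up times and event indicator:
    df = pd.DataFrame({
        "sbp_slope": [2.2, -0.3, 3.2, 0.5, 1.1, -1.0],
        "years": [4.1, 8.0, 3.2, 7.5, 6.3, 5.0],
        "cvd_event": [1, 0, 1, 0, 0, 1],
    })
    df["sbp_slope_sd"] = (df["sbp_slope"] - df["sbp_slope"].mean()) / df["sbp_slope"].std()

    cph = CoxPHFitter()
    cph.fit(df[["sbp_slope_sd", "years", "cvd_event"]],
            duration_col="years", event_col="cvd_event")
    print(cph.hazard_ratios_)  # HR per SD of annualised progression
    ```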

    Predictive value for cardiovascular events of common carotid intima media thickness and its rate of change in individuals at high cardiovascular risk - Results from the PROG-IMT collaboration.

    AIMS: Carotid intima media thickness (CIMT) predicts cardiovascular disease (CVD) events, but the predictive value of CIMT change is debated. We assessed the relation between CIMT change and events in individuals at high cardiovascular risk. METHODS AND RESULTS: From 31 cohorts with two CIMT scans (total n = 89,070), on average 3.6 years apart, and clinical follow-up, subcohorts were drawn: (A) individuals with at least three cardiovascular risk factors without previous CVD events, (B) individuals with carotid plaques without previous CVD events, and (C) individuals with previous CVD events. Cox regression models were fitted to estimate the hazard ratio (HR) of the combined endpoint (myocardial infarction, stroke or vascular death) per standard deviation (SD) of CIMT change, adjusted for CVD risk factors. These HRs were pooled across studies. In groups A, B and C we observed 3483, 2845 and 1165 endpoint events, respectively. In group A, average common CIMT was 0.79 mm (SD 0.16 mm) and annual common CIMT change was 0.01 mm (SD 0.07 mm). The pooled HR per SD of annual common CIMT change (0.02 to 0.43 mm) was 0.99 (95% confidence interval: 0.95-1.02) in group A, 0.98 (0.93-1.04) in group B, and 0.95 (0.89-1.04) in group C. The HR per SD of common CIMT (average of the first and second CIMT scans, 0.09 to 0.75 mm) was 1.15 (1.07-1.23) in group A, 1.13 (1.05-1.22) in group B, and 1.12 (1.05-1.20) in group C. CONCLUSIONS: We confirm that common CIMT is associated with future CVD events in individuals at high risk. CIMT change was not related to future event risk in high-risk individuals.
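
    Both PROG-IMT abstracts above pool study-level log hazard ratios by random effects meta-analysis. A hedged sketch of one common way to do this (DerSimonian-Laird inverse-variance pooling) is shown below; the input log HRs and standard errors are hypothetical.

    ```python
    # Sketch of random-effects pooling of per-study log hazard ratios
    # (DerSimonian-Laird). Inputs are made-up log HRs per SD of CIMT change.
    import numpy as np

    def pool_random_effects(log_hr, se):
        """Pool study-level log HRs; returns pooled HR and its 95% CI."""
        y, se = np.asarray(log_hr), np.asarray(se)
        w = 1.0 / se**2                          # fixed-effect weights
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q
        tau2 = max(0.0, (q - (len(y) - 1))
                   / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_star = 1.0 / (se**2 + tau2)            # random-effects weights
        y_pooled = np.sum(w_star * y) / np.sum(w_star)
        se_pooled = np.sqrt(1.0 / np.sum(w_star))
        hr = np.exp(y_pooled)
        ci = np.exp([y_pooled - 1.96 * se_pooled, y_pooled + 1.96 * se_pooled])
        return hr, ci

    hr, (lo, hi) = pool_random_effects(
        log_hr=[-0.02, 0.01, -0.03, 0.00], se=[0.02, 0.03, 0.04, 0.02])
    print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```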

    Automatic identification of variables in epidemiological datasets using logic regression

    Background: For an individual participant data (IPD) meta-analysis, multiple datasets must be transformed into a consistent format, e.g. using uniform variable names. When large numbers of datasets have to be processed, this can be a time-consuming and error-prone task. Automated or semi-automated identification of variables can help to reduce the workload and improve data quality. For semi-automation, high sensitivity in the recognition of matching variables is particularly important, because it allows building software that presents, for a target variable, a choice of source variables from which a user can choose the matching one, with only a low risk of having missed a correct source variable. Methods: For each variable in a set of target variables, a number of simple rules were manually created. With logic regression, an optimal Boolean combination of these rules was sought for every target variable, using a random subset of a large database of epidemiological and clinical cohort data (construction subset). In a second subset of this database (validation subset), these optimal rule combinations were validated. Results: In the construction sample, the 41 target variables were allocated with an average positive predictive value (PPV) of 34% and a negative predictive value (NPV) of 95%. In the validation sample, PPV was 33%, whereas NPV remained at 94%. PPV was 50% or less for 63% of all variables in the construction sample and for 71% in the validation sample. Conclusions: We demonstrated that the application of logic regression to a complex data management task in large epidemiological IPD meta-analyses is feasible. However, the performance of the algorithm is poor, which may require backup strategies.
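
    A toy sketch of the rule-based side of this approach, with hypothetical rules, variables and gold standard: simple rules fire on a variable's metadata, a Boolean combination of rules flags candidates for a target variable, and candidates are scored with PPV and NPV. The study searched for the optimal Boolean combination with logic regression; here one fixed combination stands in for that result.

    ```python
    # Hypothetical rules for the target variable "systolic blood pressure".

    def rule_name_contains_sbp(var):   # simple rule 1
        return "sbp" in var["name"].lower() or "systolic" in var["label"].lower()

    def rule_unit_mmhg(var):           # simple rule 2
        return var.get("unit", "").lower() == "mmhg"

    def rule_numeric(var):             # simple rule 3
        return var["type"] == "numeric"

    def is_candidate(var):
        """One fixed Boolean combination: (rule1 OR rule2) AND rule3."""
        return (rule_name_contains_sbp(var) or rule_unit_mmhg(var)) \
            and rule_numeric(var)

    def ppv_npv(variables, truth):
        """PPV/NPV of the combined rule against gold-standard matches."""
        tp = sum(is_candidate(v) and truth[v["name"]] for v in variables)
        fp = sum(is_candidate(v) and not truth[v["name"]] for v in variables)
        tn = sum(not is_candidate(v) and not truth[v["name"]] for v in variables)
        fn = sum(not is_candidate(v) and truth[v["name"]] for v in variables)
        return tp / (tp + fp), tn / (tn + fn)

    variables = [
        {"name": "SBP_1", "label": "systolic blood pressure",
         "unit": "mmHg", "type": "numeric"},
        {"name": "chol", "label": "total cholesterol",
         "unit": "mg/dl", "type": "numeric"},
        {"name": "bp_note", "label": "systolic reading note", "type": "string"},
    ]
    truth = {"SBP_1": True, "chol": False, "bp_note": False}
    print(ppv_npv(variables, truth))  # -> (1.0, 1.0) on this toy example
    ```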

    Bypass Strong V-Structures and Find an Isomorphic Labelled Subgraph in Linear Time

    This paper identifies a condition under which the existence of an isomorphic labelled subgraph can be decided in linear time.

    Monitoring with Graph-Grammars as formal operational Models

    Machine (PAM, [LKI89], [Kuc89], [Loo90]). This machine was designed and implemented as a software prototype at the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. A machine performing graph reduction evaluates a functional expression according to a set of given function definitions. In pure functional languages, multiple occurrences of subexpressions need to be evaluated just once. Therefore the subtrees representing the common subexpressions can be identified, and the abstract syntax tree of the functional expression is transformed into a graph. During the evaluation the machine holds the expression, together with bookkeeping information, in its memory. Since a graph reduction machine uses graphs as its main data structure, it is a very suitable application for a graph-grammar model. Several other projects have designed and implemented parallel graph reduction machines, for example ALICE ([DCFHR87]) or GRIP ([HamPey90], [PCSH87]). So there is a broad application area of monitoring…
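
    A minimal sketch of the sharing step described above, assuming nothing about the PAM itself: hash-consing rebuilds an abstract syntax tree bottom-up so that structurally identical subexpressions become a single graph node, turning the tree into a DAG whose shared nodes are evaluated just once.

    ```python
    # Hash-consing: turn an expression tree into a graph (DAG) in which every
    # common subexpression exists exactly once. Names are illustrative.

    class Node:
        def __init__(self, op, *children):
            self.op, self.children = op, children

    def hash_cons(node, table=None):
        """Rebuild a tree bottom-up, reusing structurally identical subtrees."""
        if table is None:
            table = {}
        shared_children = tuple(hash_cons(c, table) for c in node.children)
        key = (node.op, tuple(id(c) for c in shared_children))
        if key not in table:
            table[key] = Node(node.op, *shared_children)
        return table[key]

    # (x * x) + (x * x): the two products collapse to one shared graph node.
    x = Node("x")
    tree = Node("+", Node("*", x, x), Node("*", x, x))
    dag = hash_cons(tree)
    assert dag.children[0] is dag.children[1]  # shared subexpression
    ```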

    Requirements to a Framework for Sustainable Integration of System Development Tools

    Tool integration still lacks appropriate solutions. For an integration project at DaimlerChrysler, a number of requirements have been identified that are intended to lead to a sustainable integration framework open to further evolution. The requirements are based on a number of principles and cover several levels: user, system, architectural, and realisation requirements. With these requirements in mind, a first prototype that remains open to further evolution has been successfully developed.

    Introductory paper

    Graph Transformations for Model-Based Testing
