
    A Web Service Composition Method Based on OpenAPI Semantic Annotations

    Automatic Web service composition is a research direction that aims to improve the process of aggregating multiple Web services to create new, specific functionality. The use of semantics is required, as a proper semantic model with annotation standards enables the automated reasoning needed to solve non-trivial cases. Most previous models are limited to describing service parameters as concepts in a simple hierarchy. Our proposed method increases expressiveness at the parameter level by using concept properties that define attributes expressed by name and type; concept properties are inherited. The paper also describes how parameters are matched to create valid compositions automatically. The composition algorithm is then applied in practice to descriptions of Web services implemented as REST APIs and expressed by OpenAPI specifications. Our proposal uses knowledge models (ontologies) to enhance these OpenAPI constructs with JSON-LD semantic annotations in order to obtain better compositions for the involved services. We also propose an adjusted composition algorithm that extends the semantic knowledge defined by our model. (Comment: International Conference on e-Business Engineering (ICEBE), 9 pages.)
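    To make the parameter-matching idea concrete, here is a minimal sketch in Python. The JSON-LD keys, concept names, and the flat ontology lookup are illustrative assumptions, not the paper's actual schema; the real model also handles property inheritance and richer OpenAPI structure.

```python
# Minimal illustrative sketch (not the paper's actual schema or algorithm).
# Assumption: an output parameter satisfies an input parameter if its concept
# carries every property (name and type) that the input concept requires.

# Hypothetical ontology fragment: concept -> {property name: property type}
ONTOLOGY = {
    "schema:PostalAddress": {"streetAddress": "string", "postalCode": "string"},
    "schema:GeoCoordinates": {"latitude": "number", "longitude": "number"},
}

# OpenAPI-style parameters enriched with a JSON-LD annotation (hypothetical keys).
geocode_output = {
    "name": "location",
    "schema": {"type": "object"},
    "x-jsonld": {"@type": "schema:GeoCoordinates"},
}
map_input = {
    "name": "point",
    "schema": {"type": "object"},
    "x-jsonld": {"@type": "schema:GeoCoordinates"},
}

def properties_of(concept: str) -> dict:
    """Return the properties of a concept; a flat lookup in this toy example."""
    return ONTOLOGY.get(concept, {})

def parameter_match(output_param: dict, input_param: dict) -> bool:
    """True if the output's concept provides all properties the input's concept needs."""
    out_props = properties_of(output_param["x-jsonld"]["@type"])
    in_props = properties_of(input_param["x-jsonld"]["@type"])
    return all(out_props.get(name) == typ for name, typ in in_props.items())

print(parameter_match(geocode_output, map_input))  # True: the concepts share all properties
```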

    Geochemical characterization of oceanic basalts using Artificial Neural Network

    Geochemical discriminant diagrams help to distinguish volcanics recovered from different tectonic settings, but these diagrams tend to group ocean-floor basalts (OFB) under one class, i.e., as mid-oceanic ridge basalts (MORB). Hence, a method is specifically needed to identify OFB as normal (N-MORB), enriched (E-MORB), and ocean island basalts (OIB).
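    As a rough illustration of the approach named in the title, the following sketch trains a small feed-forward network to separate three basalt classes. The features, network size, and data are synthetic stand-ins, not the study's dataset or architecture.

```python
# Illustrative sketch only: a small feed-forward network classifying basalt
# samples into N-MORB / E-MORB / OIB from geochemical features. The feature
# choice and network size are assumptions made for this toy example.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
classes = ["N-MORB", "E-MORB", "OIB"]

# Synthetic stand-in data: 100 samples x 5 geochemical features per class,
# with class-dependent means so the toy problem is learnable.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(100, 5)) for i in range(3)])
y = np.repeat(classes, 100)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```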

    Frequency of human immunodeficiency virus (HIV) testing in urban vs. rural areas of the United States: Results from a nationally-representative sample

    Background: Studies in the United States show that rural persons with HIV are more likely than their urban counterparts to be diagnosed at a late stage of infection, suggesting missed opportunities for HIV testing in rural areas. To inform discussion of HIV testing policies in rural areas, we generated nationally representative, population-based estimates of HIV testing frequencies in urban vs. rural areas of the United States.
    Methods: Secondary analysis of 2005 and 2009 Behavioral Risk Factor Surveillance System (BRFSS) data. Dependent variables were self-reported lifetime and past-year HIV testing. Urban vs. rural residence was determined using the metropolitan area framework and Urban Influence Codes and was categorized as 1) metropolitan, center city (the most urban); 2) metropolitan, other; 3) non-metropolitan, adjacent to metropolitan; 4) non-metropolitan, micropolitan; and 5) remote, non-metropolitan (the most rural).
    Results: The 2005 sample included 257,895 respondents. Lifetime HIV testing frequencies ranged from 43.6% among persons residing in the most urban areas to 32.2% among persons in the most rural areas (P < 0.001). Past-year testing frequencies ranged from 13.5% to 7.3% in these groups (P < 0.001). After adjusting for demographics (age, sex, race/ethnicity, and region of residence) and self-reported HIV risk factors, persons in the most remote rural areas were substantially less likely than persons in the most urban areas to report HIV testing in the past year (odds ratio 0.65, 95% CI 0.57-0.75). Testing rates in urban and rural areas did not change substantively following the 2006 Centers for Disease Control and Prevention recommendation for routine, population-based HIV testing in healthcare settings. In metropolitan (urban) areas, 11.5% (95% CI 11.2%-11.8%) reported past-year HIV testing in 2005 vs. 11.4% (95% CI 11.1%-11.7%) in 2009 (P = 0.93). In non-metropolitan areas, 8.7% (95% CI 8.2%-9.2%) were tested in 2005 vs. 7.7% (95% CI 7.2%-8.2%) in 2009 (P = 0.03).
    Conclusions: Rural persons are less likely than urban persons to report prior HIV testing, which may contribute to later HIV diagnosis in rural areas. Strategies to increase HIV testing in rural areas need to be considered.
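    The adjusted comparison reported above is, in essence, a logistic regression of past-year testing on residence category plus covariates. The sketch below shows that kind of model on synthetic data; the variable names are hypothetical, and a faithful BRFSS analysis would also account for the complex survey design and weights.

```python
# Illustrative sketch of an adjusted odds-ratio analysis: logistic regression
# of past-year HIV testing on an urban-rural category, adjusting for
# demographics. Synthetic data and hypothetical variable names; no survey
# weighting is applied here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "tested_past_year": rng.integers(0, 2, n),
    "residence": rng.choice(["metro_center", "metro_other", "nonmetro_adjacent",
                             "micropolitan", "remote_rural"], n),
    "age": rng.integers(18, 80, n),
    "sex": rng.choice(["male", "female"], n),
})

model = smf.logit(
    "tested_past_year ~ C(residence, Treatment('metro_center')) + age + C(sex)",
    data=df,
).fit(disp=False)

# Adjusted odds ratios with 95% confidence intervals
summary = pd.DataFrame({
    "OR": np.exp(model.params),
    "2.5%": np.exp(model.conf_int()[0]),
    "97.5%": np.exp(model.conf_int()[1]),
})
print(summary)
```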

    Developing adaptive control: Age-related differences in task choices and awareness of proactive and reactive control demands

    Developmental changes in executive function are often explained in terms of core cognitive processes and associated neural substrates. For example, younger children tend to engage control reactively, in the moment as needed, whereas older children increasingly engage control proactively, in anticipation of needing it. Such developments may reflect increasing capacities for active maintenance that depend upon dorsolateral prefrontal cortex. However, younger children will engage proactive control when reactive control is made more difficult, suggesting that developmental changes may also reflect decisions about whether to engage control, and how. We tested awareness of temporal control demands and associated task choices in 5-year-olds, 10-year-olds, and adults using a demand selection task. Participants chose between one task that enabled proactive control and another task that enabled reactive control. Adults reported awareness of these different control demands and preferentially played the proactive task option. Ten-year-olds reported awareness of control demands but selected task options at chance. Five-year-olds showed neither awareness nor task preference, but a subsample who exhibited awareness of control demands preferentially played the reactive task option, mirroring their typical control mode. Thus, developmental improvements in executive function may in part reflect better awareness of cognitive demands and adaptive behavior, which may in turn reflect changes in how the dorsal anterior cingulate signals task demands to lateral prefrontal cortex.

    An mRNA decapping mutant deficient in P body assembly limits mRNA stabilization in response to osmotic stress

    Yeast is exposed to changing environmental conditions and must adapt its genetic program to provide a homeostatic intracellular environment. An important stress for yeast in the wild is high osmolarity, and a key response to this stress is increased mRNA stability, primarily through inhibition of deadenylation. We previously demonstrated that mutations in decapping activators (edc3∆ lsm4∆C), which result in defects in P body assembly, can destabilize mRNA under unstressed conditions. We wished to examine whether mRNA would be destabilized in the edc3∆ lsm4∆C mutant, as compared to the wild type, in response to osmotic stress, when P bodies are intense and numerous. Our results show that the edc3∆ lsm4∆C mutant limits mRNA stability in response to osmotic stress, although the magnitude of stabilization was similar to that of the wild type. The reduced mRNA stability in the edc3∆ lsm4∆C mutant was correlated with a shorter PGK1 poly(A) tail. Similarly, the MFA2 mRNA was more rapidly deadenylated, and was significantly stabilized in the ccr4∆ deadenylation mutant in the edc3∆ lsm4∆C background. These results suggest a role for these decapping factors in stabilizing mRNA and may implicate P bodies as sites of reduced mRNA degradation.

    Global, regional, and national comparative risk assessment of 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks, 1990-2015: a systematic analysis for the Global Burden of Disease Study 2015

    Summary
    Background: The Global Burden of Diseases, Injuries, and Risk Factors Study 2015 provides an up-to-date synthesis of the evidence for risk factor exposure and the attributable burden of disease. By providing national and subnational assessments spanning the past 25 years, this study can inform debates on the importance of addressing risks in context.
    Methods: We used the comparative risk assessment framework developed for previous iterations of the Global Burden of Disease Study to estimate attributable deaths, disability-adjusted life-years (DALYs), and trends in exposure by age group, sex, year, and geography for 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks from 1990 to 2015. This study included 388 risk-outcome pairs that met World Cancer Research Fund-defined criteria for convincing or probable evidence. We extracted relative risk and exposure estimates from randomised controlled trials, cohorts, pooled cohorts, household surveys, census data, satellite data, and other sources. We used statistical models to pool data, adjust for bias, and incorporate covariates. We developed a metric that allows comparisons of exposure across risk factors: the summary exposure value. Using the counterfactual scenario of theoretical minimum risk level, we estimated the portion of deaths and DALYs that could be attributed to a given risk. We decomposed trends in attributable burden into contributions from population growth, population age structure, risk exposure, and risk-deleted cause-specific DALY rates. We characterised risk exposure in relation to a Socio-demographic Index (SDI).
    Findings: Between 1990 and 2015, global exposure to unsafe sanitation, household air pollution, childhood underweight, childhood stunting, and smoking each decreased by more than 25%. Global exposure for several occupational risks, high body-mass index (BMI), and drug use increased by more than 25% over the same period. All risks jointly evaluated in 2015 accounted for 57·8% (95% CI 56·6–58·8) of global deaths and 41·2% (39·8–42·8) of DALYs. In 2015, the ten largest contributors to global DALYs among Level 3 risks were high systolic blood pressure (211·8 million [192·7 million to 231·1 million] global DALYs), smoking (148·6 million [134·2 million to 163·1 million]), high fasting plasma glucose (143·1 million [125·1 million to 163·5 million]), high BMI (120·1 million [83·8 million to 158·4 million]), childhood undernutrition (113·3 million [103·9 million to 123·4 million]), ambient particulate matter (103·1 million [90·8 million to 115·1 million]), high total cholesterol (88·7 million [74·6 million to 105·7 million]), household air pollution (85·6 million [66·7 million to 106·1 million]), alcohol use (85·0 million [77·2 million to 93·0 million]), and diets high in sodium (83·0 million [49·3 million to 127·5 million]). From 1990 to 2015, attributable DALYs declined for micronutrient deficiencies, childhood undernutrition, unsafe sanitation and water, and household air pollution; reductions in risk-deleted DALY rates rather than reductions in exposure drove these declines. Rising exposure contributed to notable increases in attributable DALYs from high BMI, high fasting plasma glucose, occupational carcinogens, and drug use. Environmental risks and childhood undernutrition declined steadily with SDI; low physical activity, high BMI, and high fasting plasma glucose increased with SDI. In 119 countries, metabolic risks, such as high BMI and fasting plasma glucose, contributed the most attributable DALYs in 2015. Regionally, smoking still ranked among the leading five risk factors for attributable DALYs in 109 countries; childhood underweight and unsafe sex remained primary drivers of early death and disability in much of sub-Saharan Africa.
    Interpretation: Declines in some key environmental risks have contributed to declines in critical infectious diseases. Some risks appear to be invariant to SDI. Increasing risks, including high BMI, high fasting plasma glucose, drug use, and some occupational exposures, contribute to rising burden from some conditions, but also provide opportunities for intervention. Some highly preventable risks, such as smoking, remain major causes of attributable DALYs, even as exposure is declining. Public policy makers need to pay attention to the risks that are increasingly major contributors to global burden.
    Funding: Bill & Melinda Gates Foundation.
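    The counterfactual logic in the Methods (comparing observed exposure with a theoretical-minimum-risk distribution to obtain attributable burden) can be sketched in a few lines. The exposure categories, relative risks, and DALY total below are invented numbers, not GBD 2015 estimates.

```python
# Minimal sketch of the counterfactual calculation: the population attributable
# fraction (PAF) compares observed exposure with a theoretical-minimum-risk
# exposure distribution, and attributable DALYs are the PAF applied to the
# total burden. All numbers below are hypothetical.

def paf(exposure_dist, counterfactual_dist, relative_risk):
    """PAF = (sum p_obs(x)*RR(x) - sum p_cf(x)*RR(x)) / sum p_obs(x)*RR(x)."""
    observed = sum(p * relative_risk[x] for x, p in exposure_dist.items())
    counterfactual = sum(p * relative_risk[x] for x, p in counterfactual_dist.items())
    return (observed - counterfactual) / observed

# Hypothetical categorical exposure for one risk factor and one outcome.
relative_risk = {"none": 1.0, "moderate": 1.6, "high": 2.4}
observed = {"none": 0.5, "moderate": 0.3, "high": 0.2}
theoretical_minimum = {"none": 1.0, "moderate": 0.0, "high": 0.0}

fraction = paf(observed, theoretical_minimum, relative_risk)
total_dalys = 50_000_000  # hypothetical total burden for the outcome
print(f"PAF = {fraction:.1%}, attributable DALYs = {fraction * total_dalys:,.0f}")
```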

    A cortical motor nucleus drives the basal ganglia-recipient thalamus in singing birds

    The pallido-recipient thalamus transmits information from the basal ganglia to the cortex and is critical for motor initiation and learning. Thalamic activity is strongly inhibited by pallidal inputs from the basal ganglia, but the role of nonpallidal inputs, such as excitatory inputs from cortex, remains unclear. We simultaneously recorded from presynaptic pallidal axon terminals and postsynaptic thalamocortical neurons in a basal ganglia-recipient thalamic nucleus that is necessary for vocal variability and learning in zebra finches. We found that song-locked rate modulations in the thalamus could not be explained by pallidal inputs alone and persisted following pallidal lesion. Instead, thalamic activity was likely driven by inputs from a motor cortical nucleus that is also necessary for singing. These findings suggest a role for cortical inputs to the pallido-recipient thalamus in driving premotor signals that are important for exploratory behavior and learning. Funding: National Institutes of Health (U.S.) Grants R01DC009183 and K99NS067062; Damon Runyon Cancer Research Foundation (Postdoctoral Fellowship); Charles A. King Trust (Postdoctoral Fellowship).

    A unified framework for managing provenance information in translational research

    Background: A critical aspect of the NIH Translational Research roadmap, which seeks to accelerate the delivery of "bench-side" discoveries to the patient's "bedside," is the management of the provenance metadata that keeps track of the origin and history of data resources as they traverse the path from the bench to the bedside and back. A comprehensive provenance framework is essential for researchers to verify the quality of data, reproduce scientific results published in peer-reviewed literature, validate the scientific process, and associate trust values with data and results. Traditional approaches to provenance management have focused on only partial sections of the translational research life cycle, and they do not incorporate "domain semantics", which is essential to support domain-specific querying and analysis by scientists.
    Results: We identify a common set of challenges in managing provenance information across the pre-publication and post-publication phases of data in the translational research lifecycle. We define the semantic provenance framework (SPF), underpinned by the Provenir upper-level provenance ontology, to address these challenges in the four stages of provenance metadata:
    (a) provenance collection - during data generation;
    (b) provenance representation - to support interoperability and reasoning, and to incorporate domain semantics;
    (c) provenance storage and propagation - to allow efficient storage and seamless propagation of provenance as the data is transferred across applications;
    (d) provenance query - to support queries of increasing complexity over large data sizes and to support knowledge discovery applications.
    We apply the SPF to two exemplar translational research projects, namely the Semantic Problem Solving Environment for Trypanosoma cruzi (T. cruzi SPSE) and the Biomedical Knowledge Repository (BKR) project, to demonstrate its effectiveness.
    Conclusions: The SPF provides a unified framework to effectively manage the provenance of translational research data during the pre- and post-publication phases. This framework is underpinned by an upper-level provenance ontology called Provenir that is extended to create domain-specific provenance ontologies to facilitate provenance interoperability, seamless propagation of provenance, automated querying, and analysis.
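    A minimal sketch of the collection and query stages, using RDF triples and SPARQL via rdflib, is shown below. The namespace, classes, and properties are hypothetical placeholders, not terms from the Provenir ontology or the SPF implementation.

```python
# Illustrative sketch only: recording provenance as RDF triples and querying
# them with SPARQL. The namespace and property names are hypothetical
# stand-ins, not the actual Provenir ontology terms.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/provenance#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Provenance collection: record that a dataset was derived from a sample
# by a sequencing run performed by a particular lab.
g.add((EX.dataset42, RDF.type, EX.Data))
g.add((EX.dataset42, EX.derivedFrom, EX.sample7))
g.add((EX.dataset42, EX.generatedBy, EX.sequencingRun3))
g.add((EX.sequencingRun3, EX.performedBy, EX.labA))
g.add((EX.sequencingRun3, EX.timestamp, Literal("2011-03-15T10:00:00")))

# Provenance query: which processes and agents produced dataset42?
results = g.query("""
    PREFIX ex: <http://example.org/provenance#>
    SELECT ?process ?agent WHERE {
        ex:dataset42 ex:generatedBy ?process .
        ?process ex:performedBy ?agent .
    }
""")
for process, agent in results:
    print(process, agent)
```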

    Learning a Prior on Regulatory Potential from eQTL Data

    Genome-wide RNA expression data provide a detailed view of an organism's biological state; hence, a dataset measuring expression variation between genetically diverse individuals (eQTL data) may provide important insights into the genetics of complex traits. However, with data from a relatively small number of individuals, it is difficult to distinguish true causal polymorphisms from the large number of possibilities. The problem is particularly challenging in populations with significant linkage disequilibrium, where traits are often linked to large chromosomal regions containing many genes. Here, we present a novel method, Lirnet, that automatically learns a regulatory potential for each sequence polymorphism, estimating how likely it is to have a significant effect on gene expression. This regulatory potential is defined in terms of "regulatory features" (including the function of the gene and the conservation, type, and position of genetic polymorphisms) that are available for any organism. The extent to which the different features influence the regulatory potential is learned automatically, making Lirnet readily applicable to different datasets, organisms, and feature sets. We apply Lirnet both to the human HapMap eQTL dataset and to a yeast eQTL dataset, and we provide statistical and biological results demonstrating that Lirnet produces significantly better regulatory programs than other recent approaches. We demonstrate in the yeast data that Lirnet can correctly suggest a specific causal sequence variation within a large, linked chromosomal region. In one example, Lirnet uncovered a novel, experimentally validated connection between Puf3 (a sequence-specific RNA binding protein) and P-bodies (cytoplasmic structures that regulate translation and RNA stability), as well as the particular causative polymorphism, a SNP in Mkt1, that induces the variation in the pathway.
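    A much-simplified sketch of the underlying idea: learn weights over regulatory features so that a weighted combination yields a per-polymorphism regulatory potential. The toy logistic model on synthetic data below is not Lirnet's actual joint learning procedure, and the feature names are hypothetical.

```python
# Toy illustration of learning feature weights that map "regulatory features"
# of each polymorphism to a regulatory-potential score (usable as a prior in
# eQTL mapping). Synthetic data; not Lirnet's algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["conservation", "is_nonsynonymous", "in_promoter", "distance_to_tss_kb"]

# Synthetic polymorphisms: features plus a label marking (hypothetically) causal ones.
X = rng.normal(size=(500, len(feature_names)))
true_weights = np.array([1.5, 2.0, 1.0, -0.5])          # assumed ground truth
y = (X @ true_weights + rng.normal(scale=1.0, size=500)) > 0

model = LogisticRegression().fit(X, y)

# The learned coefficients play the role of feature weights; the predicted
# probability for each polymorphism serves as its regulatory potential.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>20s}: {coef:+.2f}")
regulatory_potential = model.predict_proba(X)[:, 1]
print("example priors:", np.round(regulatory_potential[:5], 2))
```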