
    An Innovative Simulation Environment for Cross-Domain Policy Enforcement

    Policy-based management is necessary for cross-domain organizational collaborations and system integrations. In practice, systems from different organizations or domains use very different high-level policy representations and low-level enforcement mechanisms. To ensure the compatibility and enforceability of one policy set in another domain, a simulation environment is needed prior to actual policy deployment and enforcement code development. This paper proposes an enforcement architecture and develops a simulation framework for cross-domain policy enforcement. The environment simulates the problem of enforcing policies across domain boundaries when permanent or temporary collaborations span multiple domains. The middleware derived from this simulation environment can also be used to generate policy enforcement components directly for permanent integration or temporary interactions.
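    The abstract does not spell out the enforcement mechanics, but the compatibility question it raises can be illustrated with a small sketch: translating a policy rule from one domain's vocabulary into another's and flagging terms that cannot be mapped. Everything below (class names, role and resource maps, the first-match evaluator) is hypothetical Python for illustration only; it is not the architecture proposed in the paper.

        # Purely illustrative toy: the classes, maps and evaluation rule below are
        # assumptions for this sketch, not the architecture proposed in the paper.
        from dataclasses import dataclass

        @dataclass
        class PolicyRule:
            subject_role: str   # role name in the source domain's vocabulary
            resource: str       # resource identifier in the source domain
            action: str         # e.g. "read", "write"
            effect: str         # "permit" or "deny"

        def translate_rule(rule, role_map, resource_map):
            """Map a source-domain rule into the target domain's vocabulary.

            Returns (translated_rule, issues); unmapped terms are reported so a
            simulation run can flag rules that are not enforceable in the target domain.
            """
            issues = []
            role = role_map.get(rule.subject_role)
            if role is None:
                issues.append(f"role '{rule.subject_role}' has no target-domain equivalent")
            res = resource_map.get(rule.resource)
            if res is None:
                issues.append(f"resource '{rule.resource}' has no target-domain equivalent")
            return PolicyRule(role or rule.subject_role, res or rule.resource,
                              rule.action, rule.effect), issues

        def enforce(rules, subject_role, resource, action):
            """First-match evaluation with a default-deny fallback."""
            for r in rules:
                if (r.subject_role, r.resource, r.action) == (subject_role, resource, action):
                    return r.effect
            return "deny"

        # Simulate deploying one hospital-domain rule inside a hypothetical lab domain
        rule = PolicyRule("physician", "patient_record", "read", "permit")
        translated, issues = translate_rule(rule,
                                            role_map={"physician": "clinical_staff"},
                                            resource_map={"patient_record": "subject_file"})
        print(issues)                                                           # [] -> enforceable
        print(enforce([translated], "clinical_staff", "subject_file", "read"))  # "permit"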

    Policy Enforcement for Enterprise System Integration and Interoperability

    A poster on middleware software that assists with enterprise system integration and interoperability.

    A comparative study of two molecular mechanics models based on harmonic potentials

    We show that the two molecular mechanics models, the stick-spiral and the beam models, predict considerably different mechanical properties of materials based on energy equivalence. The difference between the two models is independent of the material, since all parameters of the beam model are obtained from the harmonic potentials. We demonstrate this difference for finite-width graphene nanoribbons and a single polyethylene chain by comparing molecular dynamics (MD) simulations with harmonic potentials against finite element calculations with the beam model. We also find that the difference depends strongly on the loading mode, chirality and width of the graphene nanoribbons, and that it increases with decreasing nanoribbon width under the pure bending condition. The maximum difference in the mechanical properties predicted by the two models can exceed 300% across the loading modes. Comparing the two models with MD results using the AIREBO potential, we find that the stick-spiral model overestimates and the beam model underestimates the mechanical properties of narrow armchair graphene nanoribbons under the pure bending condition. Comment: 40 pages, 21 figures
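    The abstract does not state how the beam parameters are derived, but a mapping commonly used in molecular structural mechanics equates the harmonic stretching, bending and torsion constants with the stiffness of an equivalent circular beam (EA/L = k_r, EI/L = k_theta, GJ/L = k_tau). The Python sketch below applies that assumed mapping; it is not necessarily the parameterization used in the paper, and the numbers in the example are placeholders.

        # Sketch under an assumed mapping that is common in molecular structural
        # mechanics (not necessarily the one used in the paper): the harmonic force
        # constants of a covalent bond are equated with the stiffness of an
        # equivalent circular beam via EA/L = k_r, EI/L = k_theta, GJ/L = k_tau.
        import math

        def beam_from_harmonic(k_r, k_theta, k_tau, L):
            """Return (diameter, E, G) of the equivalent circular beam of length L.

            k_r, k_theta, k_tau are the harmonic stretching, bending and torsion
            force constants; use one consistent unit system throughout.
            """
            d = 4.0 * math.sqrt(k_theta / k_r)                         # from I/A = d^2/16 = k_theta/k_r
            E = k_r ** 2 * L / (4.0 * math.pi * k_theta)               # from EA/L = k_r with A = pi*d^2/4
            G = k_r ** 2 * k_tau * L / (8.0 * math.pi * k_theta ** 2)  # from GJ/L = k_tau with J = pi*d^4/32
            return d, E, G

        # Placeholder force constants and bond length (not values from the paper)
        d, E, G = beam_from_harmonic(k_r=6.52e-7, k_theta=8.76e-10, k_tau=2.78e-10, L=0.142)
        print(d, E, G)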

    Farmer Field School and Bt cotton in China : an economic analysis

    [no abstract]

    Failure prediction of ultra capacitor stack using fuzzy inference system

    The failure of an ultracapacitor is significantly accelerated by elevated temperature or increased voltage. Because of capacitance differences between the capacitor cells, the voltage difference between cells grows after a number of deep charging/discharging cycles. This accelerates the aging of the weaker ultracapacitors and affects the output power, so correct and timely failure prediction is essential for improving stack reliability. Based on the diverse fault modes, a fuzzy rule-based inference system, which can approximate human reasoning, was considered. With this method, the uncertainty, inconvenience and inefficiency resulting from these inherent factors can be reduced. Simulation results under industrial application conditions are given to verify the method.
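    The abstract does not give the rule base, so the sketch below only illustrates the general shape of such a system: two assumed inputs (cell temperature and inter-cell voltage deviation) are fuzzified with triangular membership functions, a handful of hypothetical rules are combined with a min operator, and a weighted average produces a crisp failure-risk score. The breakpoints and rules are illustrative, not the ones used in the paper.

        # Minimal sketch of a rule-based fuzzy inference step for failure risk,
        # assuming two inputs (cell temperature and inter-cell voltage deviation).
        # Membership functions, rules and thresholds are illustrative only.

        def tri(x, a, b, c):
            """Triangular membership function with peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def failure_risk(temp_c, dv_volts):
            # Fuzzify the inputs (illustrative breakpoints)
            temp_high = tri(temp_c, 40.0, 65.0, 90.0)
            temp_ok   = tri(temp_c, 0.0, 25.0, 50.0)
            dv_large  = tri(dv_volts, 0.05, 0.20, 0.35)
            dv_small  = tri(dv_volts, 0.0, 0.02, 0.08)

            # Rules (AND = min); each rule votes for a crisp risk level in [0, 1]
            rules = [
                (min(temp_high, dv_large), 0.9),   # hot cell and large imbalance -> high risk
                (min(temp_high, dv_small), 0.6),   # hot but balanced -> medium risk
                (min(temp_ok,   dv_large), 0.5),   # cool but imbalanced -> medium risk
                (min(temp_ok,   dv_small), 0.1),   # nominal operation -> low risk
            ]

            # Weighted-average defuzzification (zero-order Sugeno style)
            total = sum(w for w, _ in rules)
            return sum(w * level for w, level in rules) / total if total else 0.0

        print(failure_risk(temp_c=70.0, dv_volts=0.25))   # high risk expected
        print(failure_risk(temp_c=22.0, dv_volts=0.01))   # low risk expected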

    Exploring the Value of Pre-trained Language Models for Clinical Named Entity Recognition

    The practice of fine-tuning Pre-trained Language Models (PLMs) from general or domain-specific data to a specific task with limited resources has gained popularity within the field of natural language processing (NLP). In this work, we revisit this practice and carry out an investigation in clinical NLP, specifically Named Entity Recognition on drugs and their related attributes. We compare Transformer models that are trained from scratch to fine-tuned BERT-based LLMs, namely BERT, BioBERT, and ClinicalBERT. Furthermore, we examine the impact of an additional CRF layer on such models to encourage contextual learning. We use the n2c2-2018 shared task data for model development and evaluation. The experimental outcomes show that 1) CRF layers improved all language models; 2) on BIO-strict span-level evaluation using the macro-average F1 score, the fine-tuned LLMs achieved 0.83+ while the TransformerCRF model trained from scratch achieved 0.78+, a comparable performance at much lower cost, e.g. with 39.80\% fewer training parameters; 3) on BIO-strict span-level evaluation using the weighted-average F1 score, ClinicalBERT-CRF, BERT-CRF, and TransformerCRF exhibited much smaller score differences, at 97.59\%/97.44\%/96.84\% respectively; and 4) efficient training by down-sampling for a better data distribution further reduced the training cost and the need for data while maintaining similar scores, i.e. around 0.02 points lower than using the full dataset. Our models will be hosted at \url{https://github.com/HECTA-UoM/TransformerCRF} Comment: working paper - Large Language Models, Fine-tuning LLMs, Clinical NLP, Medication Mining, AI for Healthcare
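    As a rough illustration of the model family compared above, the sketch below stacks a CRF layer on top of a small Transformer encoder for BIO tagging, assuming PyTorch and the third-party pytorch-crf package. The tag set, dimensions and toy data are placeholders; the authors' actual TransformerCRF implementation is the one to be hosted at the GitHub URL above.

        # Sketch of a token-classification head with a CRF layer for BIO tagging,
        # assuming PyTorch and the third-party `pytorch-crf` package. Tag set,
        # hyperparameters and encoder are placeholders, not the authors' released code.
        import torch
        import torch.nn as nn
        from torchcrf import CRF  # pip install pytorch-crf

        class TransformerCRFTagger(nn.Module):
            def __init__(self, vocab_size, num_tags, d_model=256, nhead=4, num_layers=2):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
                enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
                self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
                self.emissions = nn.Linear(d_model, num_tags)  # per-token tag scores
                self.crf = CRF(num_tags, batch_first=True)     # learns tag-transition scores

            def forward(self, token_ids, tags=None, mask=None):
                h = self.encoder(self.embed(token_ids), src_key_padding_mask=~mask)
                scores = self.emissions(h)
                if tags is not None:                       # training: negative log-likelihood
                    return -self.crf(scores, tags, mask=mask, reduction='mean')
                return self.crf.decode(scores, mask=mask)  # inference: best BIO tag paths

        # Toy usage with a 3-tag BIO scheme (e.g. O, B-DRUG, I-DRUG)
        model = TransformerCRFTagger(vocab_size=1000, num_tags=3)
        ids = torch.randint(1, 1000, (2, 8))
        mask = torch.ones(2, 8, dtype=torch.bool)
        tags = torch.randint(0, 3, (2, 8))
        loss = model(ids, tags=tags, mask=mask)
        print(loss.item(), model(ids, mask=mask))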