
    Development of a Web-Based Tutorial for Visual Builder: The GUI Designer in IBM VisualAge COBOL


    Optimal Control of a Heterogeneous Two Server System With Consideration for Power and Performance

    In this thesis we consider a system of two heterogeneous servers with a shared queue, and examine a scheduling policy for the optimal control of such a system. Previous results by Lin and Kumar and by Koole showed that a threshold policy, i.e., refraining from assigning a job to the slow server until a certain threshold has been exceeded in the job queue, is optimal when seeking only to minimise the mean sojourn time of a job in the system. We build upon these results and generalise the analytical proof of the threshold policy's optimality to take power consumption into account as another performance metric, in the setting where the faster server is more power-efficient. We also obtain preliminary results for the setting where the slower server is more power-efficient, under the restriction of low arrival rates. We use experimental data from simulations to assess the real-world applicability of a threshold policy in this setting; a comparison between a threshold policy with optimal thresholds and a first-come-first-served policy shows that the threshold policy achieves a cost improvement of up to 29.19% over the naive policy.
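
    A minimal simulation sketch of the policy comparison described above (the arrival rate, service rates, power draws, and threshold values are illustrative assumptions, not the thesis's parameters): the slow server is engaged only while the queue length exceeds a threshold, and threshold 0 recovers an eager dispatch to any idle server.

```python
import random

def simulate(threshold, lam=0.8, mu_fast=1.0, mu_slow=0.4,
             p_fast=1.0, p_slow=0.5, n_jobs=200_000, seed=0):
    """Mean sojourn time and energy per job for two heterogeneous servers
    with a shared queue; jobs go to the slow server only while the queue
    length exceeds `threshold`. Idle power is ignored for simplicity."""
    rng = random.Random(seed)
    t, queue = 0.0, []                      # queue holds arrival times
    dep = {"fast": None, "slow": None}      # server -> (departure time, arrival time)
    next_arrival = rng.expovariate(lam)
    arrived = done = 0
    sojourn = energy = 0.0

    while done < n_jobs:
        # Advance to the next event: an arrival or the earliest departure.
        t = min([next_arrival] + [d[0] for d in dep.values() if d])
        if t == next_arrival:
            queue.append(t)
            arrived += 1
            next_arrival = (t + rng.expovariate(lam)
                            if arrived < n_jobs else float("inf"))
        else:
            for name, d in dep.items():
                if d and d[0] == t:
                    sojourn += t - d[1]
                    done += 1
                    dep[name] = None
        # Dispatch: fast server whenever possible, slow server past threshold.
        if queue and dep["fast"] is None:
            s = rng.expovariate(mu_fast)
            dep["fast"] = (t + s, queue.pop(0))
            energy += p_fast * s
        if len(queue) > threshold and dep["slow"] is None:
            s = rng.expovariate(mu_slow)
            dep["slow"] = (t + s, queue.pop(0))
            energy += p_slow * s
    return sojourn / done, energy / done

for th in (0, 2, 5):
    w, e = simulate(th)
    print(f"threshold={th}: mean sojourn {w:.3f}, energy/job {e:.3f}")
```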

    OverPrompt: Enhancing ChatGPT Capabilities through an Efficient In-Context Learning Approach

    The exceptional performance of pre-trained large language models (LLMs) has revolutionised various applications, but their adoption in production environments is hindered by prohibitive costs and inefficiencies, particularly when long prompts are used. This paper proposes OverPrompt, an in-context learning method that improves LLM efficiency and performance by processing multiple inputs in parallel. Evaluated across diverse datasets, OverPrompt improves task efficiency and integrates a diverse range of examples for better performance. In particular, it improves fact-checking and sentiment analysis tasks when supplemented with contextual information. Grouping synthetic data further enhances performance, suggesting a viable approach for data augmentation.
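
    A minimal sketch of the batching idea (the prompt template, task wording, and reply parsing are illustrative assumptions, not the paper's exact setup): several inputs are packed into a single prompt and the model is asked for one label per numbered item, so the instruction overhead is paid once rather than per input.

```python
def build_grouped_prompt(texts, task):
    """Pack several inputs into one prompt so the task instruction is paid once."""
    items = "\n".join(f"{i + 1}. {t}" for i, t in enumerate(texts))
    return (f"{task}\n\n{items}\n\n"
            "Answer with one line per item in the form '<number>: <label>'.")

def parse_grouped_reply(reply, n):
    """Map a '<number>: <label>' reply back onto the n inputs."""
    labels = {}
    for line in reply.splitlines():
        num, _, label = line.partition(":")
        if num.strip().isdigit():
            labels[int(num.strip())] = label.strip()
    return [labels.get(i + 1) for i in range(n)]

reviews = ["Great battery life.", "Screen cracked after a week.", "Does what it says."]
prompt = build_grouped_prompt(reviews, "Classify the sentiment of each review as positive or negative.")
print(prompt)
print(parse_grouped_reply("1: positive\n2: negative\n3: positive", len(reviews)))
```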

    Efficient Hindsight Experience Replay with Transformed Data Augmentation

    Motion control of robots is a high-dimensional, nonlinear control problem that is often difficult to handle with traditional dynamics-based path-planning methods. Reinforcement learning is currently an effective means of solving robot motion control problems, but it suffers from drawbacks such as the large number of trials required and sparse rewards, which limit its practical efficiency. The Hindsight Experience Replay (HER) algorithm addresses the reward-sparsity problem by constructing virtual target values. However, HER still trains slowly in the early stages, and there is room to improve its sample efficiency. Augmenting existing data to improve training efficiency is widely used in supervised learning but remains less common in reinforcement learning. In this paper, we propose the Hindsight Experience Replay with Transformed Data Augmentation (TDAHER) algorithm, which combines a transformed data augmentation method for reinforcement learning samples with the HER algorithm. To address the reduced accuracy of augmented samples in the later stages of training, a decaying participation factor is introduced. Comparisons on four simulated robot control tasks show that the algorithm effectively improves the training efficiency of reinforcement learning.
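
    A minimal sketch of the two mechanisms described above (the mirror transform, the sample layout, and the exponential decay schedule are illustrative assumptions, not the paper's exact design): HER relabels each transition's goal with a goal the episode actually achieved, and a transformed copy of each transition is added with a probability that decays as training proceeds.

```python
import math
import random

rng = random.Random(0)

def mirror(x):
    """Example symmetry transform: reflect a 1-D position about the origin."""
    return -x

def her_with_augmentation(episode, step, decay=1e-3):
    """episode: list of (state, action, achieved_goal); returns relabelled samples."""
    samples = []
    final_goal = episode[-1][2]                 # HER: treat the achieved goal as the target
    participation = math.exp(-decay * step)     # augmented samples fade out late in training
    for state, action, achieved_goal in episode:
        samples.append((state, action, final_goal))
        if rng.random() < participation:        # transformed data augmentation
            samples.append((mirror(state), action, mirror(final_goal)))
    return samples

episode = [(0.0, +1, 0.5), (0.5, +1, 1.0)]
print(her_with_augmentation(episode, step=100))     # early training: most copies kept
print(her_with_augmentation(episode, step=10_000))  # late training: few copies kept
```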

    CUE: An Uncertainty Interpretation Framework for Text Classifiers Built on Pre-Trained Language Models

    Text classifiers built on Pre-trained Language Models (PLMs) have achieved remarkable progress in various tasks including sentiment analysis, natural language inference, and question answering. However, the uncertain predictions these classifiers produce pose a challenge to their reliability when deployed in practical applications. Much effort has been devoted to designing probes to understand what PLMs capture, but few studies have examined the factors that influence the predictive uncertainty of PLM-based classifiers. In this paper, we propose a novel framework, called CUE, which aims to interpret the uncertainties inherent in the predictions of PLM-based models. In particular, we first map PLM-encoded representations to a latent space via a variational auto-encoder. We then generate text representations by perturbing the latent space, which causes fluctuations in predictive uncertainty. By comparing the difference in predictive uncertainty between the perturbed and the original text representations, we can identify the latent dimensions responsible for uncertainty and subsequently trace back to the input features that contribute to it. Our extensive experiments on four benchmark datasets covering linguistic acceptability classification, emotion classification, and natural language inference show the feasibility of the proposed framework. Our source code is available at: https://github.com/lijiazheng99/CUE
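
    A minimal numerical sketch of the perturbation probe (the random linear maps stand in for the paper's VAE encoder/decoder and PLM classifier; the perturbation size eps is an illustrative choice): each latent dimension is nudged in turn and ranked by how much the classifier's predictive entropy shifts.

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(8, 16))   # text representation (16-d) -> latent (8-d)
W_dec = W_enc.T                    # latent -> reconstructed representation
W_clf = rng.normal(size=(16, 3))   # representation -> 3-class logits

def entropy(logits):
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def uncertain_dims(h, eps=0.5):
    """Rank latent dimensions by how much perturbing them shifts predictive entropy."""
    z = W_enc @ h
    base = entropy((W_dec @ z) @ W_clf)
    shifts = []
    for i in range(z.size):
        z_pert = z.copy()
        z_pert[i] += eps
        shifts.append(abs(entropy((W_dec @ z_pert) @ W_clf) - base))
    return np.argsort(shifts)[::-1]     # most uncertainty-driving dimensions first

h = rng.normal(size=16)                 # stand-in for a PLM-encoded text representation
print(uncertain_dims(h)[:3])
```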

    Semantic-aware Transmission Scheduling: a Monotonicity-driven Deep Reinforcement Learning Approach

    For cyber-physical systems in the 6G era, semantic communications connecting distributed devices for dynamic control and remote state estimation are required to guarantee application-level performance, rather than merely communication-centric performance. Semantics here is a measure of the usefulness of information transmissions. Semantic-aware transmission scheduling for a large system often involves a large decision-making space, and existing algorithms cannot obtain the optimal policy effectively. In this paper, we first investigate the fundamental properties of the optimal semantic-aware scheduling policy, including its monotonicity, and then develop advanced deep reinforcement learning (DRL) algorithms by leveraging these theoretical guidelines. Our numerical results show that the proposed algorithms can substantially reduce training time and enhance training performance compared to benchmark algorithms.
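
    The abstract does not spell out how the monotone structure enters the algorithm; one common way to exploit such a property, sketched here purely as an assumption rather than the paper's method, is to add a regularizer to the DQN loss that penalizes Q-value orderings violating the known monotonicity.

```python
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))

def monotonicity_penalty(states, states_worse, action=0):
    """Penalty that is zero when Q(s, a) >= Q(s', a) for every pair where
    s' is known to be 'worse' than s (here: componentwise-larger state,
    e.g. staler information -- an assumed ordering)."""
    q = q_net(states)[:, action]
    q_worse = q_net(states_worse)[:, action]
    return torch.relu(q_worse - q).mean()

states = torch.randn(8, 4)
penalty = monotonicity_penalty(states, states + 1.0)
print(penalty)   # add this term, suitably weighted, to the usual TD loss
```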

    CodeScore: Evaluating Code Generation by Learning Code Execution

    A proper code evaluation metric (CEM) profoundly impacts the evolution of code generation, an important research field in NLP and software engineering. Prevailing CEMs fall into match-based CEMs (e.g., BLEU, Accuracy, and CodeBLEU) and execution-based CEMs (e.g., AvgPassRatio and Pass@k), but both suffer from issues. The former only measure differences in surface form regardless of the functional equivalence of programs, while the latter incur huge execution overheads, including collecting expensive test cases, resolving tedious execution dependencies, and enormous execution time. To address these issues, in this paper we propose CodeScore, an efficient and effective CEM for code generation that estimates the test-case PassRatio of generated code without executing it. We also present a framework named UniCE for training unified code evaluation models by learning code execution, i.e., learning the PassRatio and Executability of generated code. To learn code execution comprehensively, we construct more than 100 test cases for each task in several popular benchmark datasets, covering MBPP, APPS, and HumanEval. Experimental results show that CodeScore achieves state-of-the-art correlation with execution-based CEMs: CodeScore is strongly correlated with AvgPassRatio, and binary CodeScore is moderately correlated with Pass@1. In particular, CodeScore eliminates the need for test cases and execution dependencies at inference time and reduces execution time by three orders of magnitude compared to AvgPassRatio and Pass@1.
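
    For contrast with the learned metric, a minimal sketch of the quantity CodeScore is trained to predict, computed the expensive way by actually executing the generated code (the function name f and the test format are illustrative assumptions, not the paper's benchmark harness):

```python
def pass_ratio(code: str, test_cases):
    """test_cases: list of (args, expected) pairs for a function named `f`."""
    env = {}
    try:
        exec(code, env)                      # run the generated code's definitions
    except Exception:
        return 0.0                           # non-executable code passes nothing
    passed = 0
    for args, expected in test_cases:
        try:
            if env["f"](*args) == expected:
                passed += 1
        except Exception:
            pass                             # a crashing test case counts as a failure
    return passed / len(test_cases)

generated = "def f(a, b):\n    return a + b"
tests = [((1, 2), 3), ((0, 0), 0), ((2, 2), 5)]
print(pass_ratio(generated, tests))          # 2/3; CodeScore estimates this without execution
```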

    Correlated diffusion of colloidal particles near a liquid-liquid interface

    Optical microscopy and multi-particle tracking are used to investigate the cross-correlated diffusion of quasi-two-dimensional (2D) colloidal particles near an oil-water interface. It is shown that the effect of the interface on correlated diffusion is asymmetric. Along the line joining the centers of two particles, the amplitude of the correlated diffusion coefficient D_∥(r) is enhanced by the interface, while its decay rate is hardly affected. In the direction perpendicular to this line, the decay rate of D_⊥(r) is enhanced at short inter-particle separations r, and this enhancement fades at long r. In addition, both D_∥(r) and D_⊥(r) are independent of the colloidal area fraction n at long r, which indicates that the hydrodynamic interactions (HIs) among the particles are dominated by those mediated through the surrounding fluid in this region. At short r, however, D_⊥(r) depends on n, which suggests that the HIs receive a larger contribution from the 2D particle monolayer itself.
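
    For reference, a common definition of the cross-correlated diffusion coefficients used above, as in two-point particle-tracking analyses (the paper's normalization may differ):

```latex
% Displacements of distinct particles i and j over lag time \tau are
% projected along (\parallel) and perpendicular to (\perp) the line
% joining their centers, and correlated as a function of separation r:
\begin{align}
  D_{\parallel}(r)\,\tau &= \tfrac{1}{2}\,
    \big\langle \Delta r_i^{\parallel}(\tau)\,\Delta r_j^{\parallel}(\tau) \big\rangle_{i \neq j}, \\
  D_{\perp}(r)\,\tau &= \tfrac{1}{2}\,
    \big\langle \Delta r_i^{\perp}(\tau)\,\Delta r_j^{\perp}(\tau) \big\rangle_{i \neq j}.
\end{align}
```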

    Effect of Slow Solidification of Ultra-thick Continuous Casting Slab on Solidification Structure and Macrosegregation

    Slow solidification of ultra-thick continuous casting slabs is an emerging method, and macrosegregation is an important indicator of slab quality. The solidification structure, characterised for example by the Secondary Dendrite Arm Spacing (SDAS), is another crucial indicator. In this paper, a slice moving-boundary model was selected and optimised, and the influence of slow solidification conditions on SDAS and macrosegregation is investigated. The results show that the SDAS increases with increasing superheat and cooling intensity. When the superheat increases from 20 K to 40 K, the SDAS increases from 156.8 μm to 158.9 μm. With mid-strong cooling, the segregation ratio decreases from 1.4331 to 1.3836, and the segregation degree decreases from 0.3535 to 0.3196. Based on these results, a new method for improving the final quality of slow-solidification continuous casting slabs is provided, which also has strong prospects for the production of large-section casting slabs.
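
    For orientation, the segregation ratio quoted above is conventionally the local solute concentration normalized by the nominal melt concentration (an assumed convention; the paper may define it slightly differently):

```latex
% Segregation ratio: local concentration C over nominal concentration C_0.
\begin{equation}
  R = \frac{C}{C_0}
\end{equation}
% R = 1.4331 thus means the local solute content is about 43% above nominal;
% mid-strong cooling reducing R to 1.3836 brings it to about 38% above.
```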