171 research outputs found

    Space-and-time-synchronized simultaneous vehicle tracking/formation using cascaded prescribed-time control

    In this paper, we present a space-and-time-synchronized control method with application to simultaneous vehicle tracking/formation. Working in polar coordinates, we correlate and decouple the reference and actual kinematics of the ego vehicle and the target, so that time and space are separated and controlled independently. As a result, the specified state is reached at the predetermined terminal time, while the relative trajectory in space is independent of time. In addition, to guarantee stabilization before the prescribed time, a cascaded prescribed-time control theorem is provided as a preliminary result for vehicle tracking control. The obtained results extend directly to the simultaneous tracking/formation of multiple vehicles. Finally, numerical examples verify the effectiveness and superiority of the proposed scheme.
    Comment: 10 pages, 5 figures. International Journal of Robust and Nonlinear Control 202
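
    A common ingredient in prescribed-time control designs (shown here only as a generic illustration, not the specific construction of this paper) is a time-varying gain that grows unboundedly as the prescribed terminal time T is approached:

\[
  \mu(t) = \frac{T}{T - t}, \qquad t \in [0, T),
\]
\[
  \dot{e}(t) = u(t), \qquad u(t) = -k\,\mu(t)\,e(t), \qquad k > 0.
\]

    For this scalar example the closed loop gives $e(t) = e(0)\,(1 - t/T)^{kT}$, so the tracking error reaches zero exactly at the prescribed time $T$, independently of the initial condition.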

    An Experimental Study of Catalytic Effects on Reaction Kinetics and Producer Gas in Gasification of Coal-Biomass Blend Chars with Steam

    The objective of this thesis is to experimentally investigate the performance of steam gasification of chars of pure coal (lignite, sub-bituminous), pure biomass (radiata pine, eucalyptus nitens) and their blends. The influences of gasification temperature, type of coal and biomass, coal-biomass blending ratio, and alkali and alkaline earth metals (AAEM) in lignite on specific gasification characteristics (producer gas composition and yield, char reactivity) were studied. In addition, synergistic effects in the co-gasification of coal-biomass blend chars were investigated. This project is in accordance with the objectives of the BISGAS Consortium.

    Experiments were performed in a bench-scale gasifier at gasification temperatures of 850°C, 900°C and 950°C. Two types of coal (lignite and sub-bituminous) and two kinds of biomass (radiata pine and eucalyptus nitens) from New Zealand were selected as sample fuels. From these raw materials, chars with coal-to-biomass blending ratios of 100:0 (pure coal), 80:20, 50:50, 20:80 and 0:100 (pure biomass), derived through devolatilization at 900°C for 7 minutes, were gasified with steam as the gasification agent. During the gasification tests, the producer gas composition and gas production were continuously analysed using a micro gas chromatograph. When gas production became undetectable, the gasification process was assumed to be complete and the gasification time was recorded. The producer gas consisted of three main components: hydrogen (H2), carbon monoxide (CO) and carbon dioxide (CO2).

    The results from gasification of chars of the individual solid fuels (coal or biomass) confirmed that biomass char gasification was faster than coal char gasification. As the gasification temperature increased, the H2 yield increased in coal char gasification but decreased in biomass char gasification, while CO yields increased and CO2 yields decreased in both. In addition, the char reactivity of all the pure fuel samples increased with elevated gasification temperatures.

    The results from co-gasification of coal-biomass blend chars showed that the syngas production rate, defined as the total gas production divided by the gasification completion time, was enhanced by an increase in gasification temperature as well as by a higher biomass proportion in the blend. The AAEM species played a significant catalytic role in both the gasification of pure coal chars and the co-gasification of coal-biomass blend chars: their presence increased the producer gas yield and enhanced the char reactivity. Positive synergistic effects of blending on the syngas production rate existed only in the co-gasification of lignite-eucalyptus nitens blend chars; the other blend chars showed either insignificant or negative synergistic effects on the syngas production rate.
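
    As a small illustration of the metric defined above, the syngas production rate is the total gas production divided by the gasification completion time; the values in the sketch below are hypothetical placeholders, not measurements from this thesis.

# Syngas production rate = total gas production / gasification completion time,
# as defined in the abstract. Values below are hypothetical placeholders.
def syngas_production_rate(total_gas_ml, completion_time_min):
    """Return the production rate in mL per minute."""
    return total_gas_ml / completion_time_min

# Hypothetical example: 1200 mL of producer gas collected over a 40-minute run.
print(syngas_production_rate(1200.0, 40.0))  # 30.0 mL/min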

    PO-110 The relationship between beverage consumption and overweight of university students

    Objective: Previous studies have shown a clear correlation between university students' eating habits and the rate of obesity. According to the WHO, obesity is not only a chronic disease that harms health but also a risk factor for a variety of chronic diseases such as type 2 diabetes, coronary heart disease and respiratory disease. With economic growth, the continuous improvement of university students' living standards, the growth of beverage consumption and the diversification of consumption patterns, beverages have gradually become part of university students' daily diet. The relationship between beverage intake and health in university students is, however, not clear. Understanding the relationship between consumption habits and the prevalence of overweight can help university students establish a healthy lifestyle, control their weight and consume beverages sensibly. A questionnaire survey was therefore conducted, and students were divided into a beverage group and a non-beverage group for cross-sectional comparison, in order to investigate the relationship between university students' beverage consumption habits and the prevalence of overweight.

    Methods: We surveyed university students (130 males and 115 females) from one university using a self-designed, self-administered questionnaire covering beverage consumption and the weekly frequency and variety of beverages consumed. All subjects' height, weight, waist circumference and hip circumference were measured, and BMI and waist-to-hip ratio were calculated. The alpha reliability coefficient, non-parametric tests, the chi-square test, the Kruskal-Wallis H test, binary unconditional logistic regression and multivariate logistic regression were used for statistical analysis of the data.

    Results: The intake of the different beverage types among university students was as follows: sugary beverages (carbonated drinks and juice) accounted for up to 55.5%, dairy products 19.5%, tea beverages (no sugar or low sugar) 12.5%, and functional beverages 25%. Male students drank more carbonated beverages than female students (P<0.01), while female students drank significantly more fruit juice than male students (P<0.01). The overweight and central obesity rates of male and female students were roughly equivalent (P>0.05). Overweight and obese (BMI ≥ 24) students consumed more sugary drinks than normal-weight students (P<0.05). Multivariate logistic regression showed that the risk factors associated with overweight and obesity were sugary drinks and purchase frequency, while the risk factors associated with central obesity included sex and the frequency of beverage purchases.

    Conclusions: The consumption of all kinds of sugary drinks among overweight and obese university students was higher than that of normal-weight students. Male students preferred carbonated and tea drinks more than female students, while female students preferred juice and milk drinks more than male students. Sugary drinks could be a risk factor for obesity, and female students are more likely to be centrally obese than male students. There is a correlation between sugary beverage intake and overweight and central obesity in university students. This research shows that the intake of sugary beverages is closely correlated with overweight and central obesity; it is important for university students to reduce their intake of sugary beverages appropriately and establish healthy beverage consumption habits.
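
    A minimal sketch of the kind of anthropometric derivation and binary logistic regression described in the Methods, assuming the BMI ≥ 24 overweight cut-off used above; all arrays are hypothetical placeholders, not data from this study.

# Minimal sketch (not the study's actual analysis code): computing BMI and
# waist-to-hip ratio and fitting a logistic regression for overweight status.
# The arrays below are hypothetical placeholders, not data from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

height_m = np.array([1.75, 1.62, 1.80, 1.58])    # hypothetical heights (m)
weight_kg = np.array([82.0, 55.0, 70.0, 68.0])   # hypothetical weights (kg)
waist_cm = np.array([92.0, 70.0, 80.0, 88.0])    # hypothetical waist (cm)
hip_cm = np.array([100.0, 92.0, 95.0, 98.0])     # hypothetical hip (cm)
sugary_drinks_per_week = np.array([7, 1, 3, 6])  # hypothetical exposure

bmi = weight_kg / height_m ** 2                  # BMI = weight / height^2
whr = waist_cm / hip_cm                          # waist-to-hip ratio
overweight = (bmi >= 24).astype(int)             # BMI >= 24 cut-off from the abstract

# Binary (unconditional) logistic regression of overweight status on exposure.
X = np.column_stack([sugary_drinks_per_week, whr])
model = LogisticRegression().fit(X, overweight)
print(bmi, whr, model.coef_)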

    An Empirical Study of CLIP for Text-based Person Search

    Text-based Person Search (TBPS) aims to retrieve person images using natural language descriptions. Recently, Contrastive Language-Image Pre-training (CLIP), a universal large cross-modal vision-language pre-training model, has performed remarkably well on various cross-modal downstream tasks owing to its powerful cross-modal semantic learning capacity. TBPS, as a fine-grained cross-modal retrieval task, is likewise seeing a rise in CLIP-based research. In order to explore the potential of the vision-language pre-training model for downstream TBPS tasks, this paper makes the first attempt to conduct a comprehensive empirical study of CLIP for TBPS, and thereby contributes a straightforward, incremental, yet strong TBPS-CLIP baseline to the TBPS community. We revisit critical design considerations under CLIP, including data augmentation and the loss function. The model, with these designs and practical training tricks, attains satisfactory performance without any sophisticated modules. We also conduct probing experiments on TBPS-CLIP in terms of model generalization and model compression, demonstrating its effectiveness from various aspects. This work is expected to provide empirical insights and to guide future CLIP-based TBPS research.
    Comment: 13 pages, 5 figures and 17 tables. Code is available at https://github.com/Flame-Chasers/TBPS-CLI
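
    As background for the CLIP-based designs discussed above, the following is a minimal sketch of the standard symmetric image-text contrastive loss that CLIP is pre-trained with; it is not the specific TBPS-CLIP loss design, and the feature tensors are random placeholders standing in for encoder outputs.

# Minimal sketch of CLIP's symmetric image-text contrastive (InfoNCE) loss.
# This is the standard formulation, not the specific loss design studied in TBPS-CLIP.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_feats, text_feats, temperature=0.07):
    # image_feats, text_feats: (batch, dim) embeddings of paired images and captions
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)
    logits = image_feats @ text_feats.t() / temperature  # (batch, batch) similarities
    targets = torch.arange(logits.size(0))                # i-th image matches i-th text
    loss_i2t = F.cross_entropy(logits, targets)           # image-to-text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)       # text-to-image direction
    return (loss_i2t + loss_t2i) / 2

# Example with random features standing in for encoder outputs.
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
print(clip_contrastive_loss(img, txt).item())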

    ArguGPT: evaluating, understanding and identifying argumentative essays generated by GPT models

    AI-generated content (AIGC) presents a considerable challenge to educators around the world. Instructors need to be able to detect such text generated by large language models, either with the naked eye or with the help of tools. There is also a growing need to understand the lexical, syntactic and stylistic features of AIGC. To address these challenges in English language teaching, we first present ArguGPT, a balanced corpus of 4,038 argumentative essays generated by 7 GPT models in response to essay prompts from three sources: (1) in-class or homework exercises, (2) TOEFL and (3) GRE writing tasks. The machine-generated texts are paired with a roughly equal number of human-written essays at three score levels, matched by essay prompt. We then hired English instructors to distinguish machine essays from human ones. Results show that, when first exposed to machine-generated essays, the instructors detect them with only 61% accuracy; this rises to 67% after one round of minimal self-training. Next, we perform linguistic analyses of these essays, which show that machines produce sentences with more complex syntactic structures, while human essays tend to be lexically more complex. Finally, we test existing AIGC detectors and build our own detectors using SVMs and RoBERTa. Results suggest that a RoBERTa model fine-tuned on the ArguGPT training set achieves above 90% accuracy in both essay- and sentence-level classification. To the best of our knowledge, this is the first comprehensive analysis of argumentative essays produced by generative large language models. Machine-authored essays in ArguGPT and our models will be made publicly available at https://github.com/huhailinguist/ArguGP
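
    The paper's detectors include SVMs and a fine-tuned RoBERTa. Below is a minimal, hypothetical sketch of an SVM-based machine-text detector using TF-IDF features with scikit-learn; the essays, labels and feature choices are illustrative placeholders and are not taken from ArguGPT.

# Minimal sketch of an SVM-based machine-text detector (TF-IDF features + linear SVM).
# Illustrative only: the paper's SVM detectors may use different features; the
# essay lists below are hypothetical placeholders, not data from ArguGPT.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

essays = [
    "In conclusion, technology has both advantages and disadvantages ...",   # hypothetical
    "I think school uniform is not good because students want freedom ...",  # hypothetical
]
labels = [1, 0]  # 1 = machine-generated, 0 = human-written (hypothetical labels)

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
detector.fit(essays, labels)
print(detector.predict(["Education is important for every society ..."]))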

    MELA: Multilingual Evaluation of Linguistic Acceptability

    Recent benchmarks for Large Language Models (LLMs) have mostly focused on application-driven tasks such as complex reasoning and code generation, which has led to a scarcity of purely linguistic evaluation of LLMs. Against this background, we introduce the Multilingual Evaluation of Linguistic Acceptability -- MELA, the first multilingual benchmark on linguistic acceptability, with 48K samples covering 10 languages from a diverse set of language families. We establish baselines for commonly used LLMs along with supervised models, and conduct cross-lingual transfer and multi-task learning experiments with XLM-R. In pursuit of multilingual interpretability, we analyze the weights of fine-tuned XLM-R to explore the possibility of identifying transfer difficulty between languages. Our results show that ChatGPT benefits greatly from in-context examples but still lags behind fine-tuned XLM-R, while the performance of GPT-4 is on par with fine-tuned XLM-R even in the zero-shot setting. Cross-lingual and multi-task learning experiments show that, unlike in semantic tasks, in-language training data is crucial for acceptability judgements. Results from layerwise probing indicate that the upper layers of XLM-R become a task-specific but language-agnostic region for multilingual acceptability judgment. We also introduce the concept of conflicting weight, which could be a potential indicator of the difficulty of cross-lingual transfer between languages. Our data will be available at https://github.com/sjtu-compling/MELA.
    Comment: Work in progress
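
    As an illustration of the layerwise probing mentioned above (a generic sketch under assumed details, not the paper's exact probing setup), one can extract per-layer sentence representations from XLM-R and fit a simple linear probe on acceptability labels at each layer; the sentences and labels below are hypothetical.

# Minimal sketch of layerwise probing (illustrative, not the paper's exact setup):
# extract per-layer sentence representations from XLM-R and fit a simple linear
# probe on acceptability labels for each layer. Sentences/labels are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base", output_hidden_states=True)

sentences = ["The cat sat on the mat.", "Cat the mat on sat the."]  # hypothetical examples
labels = [1, 0]                                                     # 1 = acceptable, 0 = not

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, return_tensors="pt")
    hidden_states = model(**enc).hidden_states  # tuple: embedding layer + one tensor per layer

for layer, states in enumerate(hidden_states):
    feats = states.mean(dim=1).numpy()          # mean-pool tokens into sentence vectors
    probe = LogisticRegression(max_iter=1000).fit(feats, labels)
    print(f"layer {layer}: train accuracy {probe.score(feats, labels):.2f}")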

    Self-distillation Regularized Connectionist Temporal Classification Loss for Text Recognition: A Simple Yet Effective Approach

    Text recognition methods are developing rapidly. Advanced techniques, e.g., powerful modules, language models, and un- and semi-supervised learning schemes, have successively pushed performance on public benchmarks forward. However, the problem of how to better optimize a text recognition model from the perspective of the loss function is largely overlooked. CTC-based methods, widely used in practice because of their good balance between performance and inference speed, still grapple with accuracy degradation. This is because CTC loss emphasizes the optimization of the entire sequence target while neglecting to learn individual characters. We propose a self-distillation scheme for CTC-based models to address this issue. It incorporates a framewise regularization term into the CTC loss to emphasize individual supervision, and leverages the maximum-a-posteriori estimate of the latent alignment to resolve the inconsistency that arises in distillation between CTC-based models. We refer to the regularized CTC loss as Distillation Connectionist Temporal Classification (DCTC) loss. DCTC loss is module-free, requiring no extra parameters, no longer inference lag, and no additional training data or phases. Extensive experiments on public benchmarks demonstrate that DCTC can boost text recognition model accuracy by up to 2.6% without any of these drawbacks.
    Comment: Ziyin Zhang and Ning Lu are co-first authors
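
    A loose sketch of the general idea of adding a framewise distillation term to CTC loss follows; it is illustrative only, since DCTC's actual formulation is derived from the maximum-a-posteriori latent alignment and differs in detail, and the tensors are random placeholders standing in for model outputs.

# Loose sketch: CTC loss plus a framewise distillation/regularization term
# (illustrative; not the exact DCTC formulation described in the abstract).
import torch
import torch.nn.functional as F

def ctc_with_framewise_distillation(student_logits, teacher_logits,
                                    targets, input_lengths, target_lengths,
                                    blank=0, alpha=0.1):
    # student_logits, teacher_logits: (T, N, C) per-frame class scores
    log_probs = F.log_softmax(student_logits, dim=-1)
    ctc = F.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=blank)

    # Framewise term: match each frame's distribution to a (detached) teacher,
    # providing per-character supervision that plain CTC lacks.
    teacher_probs = F.softmax(teacher_logits.detach(), dim=-1)
    frame_kl = F.kl_div(log_probs, teacher_probs, reduction="batchmean")
    return ctc + alpha * frame_kl

# Example with random tensors standing in for model outputs.
T, N, C, S = 50, 4, 37, 10
student = torch.randn(T, N, C, requires_grad=True)
teacher = torch.randn(T, N, C)
targets = torch.randint(1, C, (N, S))
loss = ctc_with_framewise_distillation(student, teacher, targets,
                                       torch.full((N,), T), torch.full((N,), S))
loss.backward()
print(loss.item())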

    Differentially Private Stream Processing at Scale

    We design, to the best of our knowledge, the first differentially private (DP) stream processing system at scale. Our system -- Differential Privacy SQL Pipelines (DP-SQLP) -- is built with a streaming framework similar to Spark Streaming, on top of the Spanner database and the F1 query engine from Google. Towards designing DP-SQLP we make both algorithmic and systems advances: we (i) design a novel DP key-selection algorithm that can operate on an unbounded set of possible keys and can scale to one billion keys contributed by users, (ii) design a preemptive execution scheme for DP key selection that avoids enumerating all the keys at each triggering time, and (iii) use algorithmic techniques from DP continual observation to release a continual DP histogram of user contributions to different keys over the stream length. We empirically demonstrate the efficacy of the system by obtaining at least a 16× reduction in error over the meaningful baselines we consider.
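
    As background for item (iii), the sketch below illustrates the classic binary-tree mechanism for differentially private continual counting from the continual observation literature; it is a generic illustration, not the mechanism actually implemented in DP-SQLP, and the stream and epsilon values are hypothetical.

# Minimal sketch of the binary-tree mechanism for DP continual counting
# (standard technique from the DP continual observation literature;
# not necessarily the exact mechanism implemented in DP-SQLP).
import numpy as np

def dp_prefix_sums(stream, epsilon, rng=np.random.default_rng(0)):
    """Release a DP running sum after each element of `stream`."""
    T = len(stream)
    levels = int(np.ceil(np.log2(T))) + 1
    # Each element contributes to at most `levels` tree nodes, so Laplace noise
    # with scale levels/epsilon on each node gives epsilon-DP overall.
    noise_scale = levels / epsilon
    node_sum = {}    # (level, index) -> true partial sum
    node_noisy = {}  # (level, index) -> noisy partial sum (noised once, then reused)
    outputs = []
    for t, x in enumerate(stream, start=1):
        # Update every dyadic tree node that covers time step t.
        for level in range(levels):
            idx = (t - 1) >> level
            node_sum[(level, idx)] = node_sum.get((level, idx), 0.0) + x
        # The prefix [1, t] decomposes into O(log T) complete dyadic blocks.
        total, remaining = 0.0, t
        for level in reversed(range(levels)):
            block = 1 << level
            if remaining >= block:
                key = (level, (t - remaining) >> level)
                if key not in node_noisy:
                    node_noisy[key] = node_sum[key] + rng.laplace(scale=noise_scale)
                total += node_noisy[key]
                remaining -= block
        outputs.append(total)
    return outputs

# Hypothetical stream of per-step contributions.
print(dp_prefix_sums([1, 0, 1, 1, 0, 1, 1, 1], epsilon=1.0))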

    Revisiting Acceptability Judgements

    In this work, we revisit linguistic acceptability in the context of large language models. We introduce CoLAC -- the Corpus of Linguistic Acceptability in Chinese, the first large-scale acceptability dataset for a non-Indo-European language. It is verified by native speakers and is the first acceptability dataset to come with two sets of labels: a linguist label and a crowd label. Our experiments show that even the largest InstructGPT model performs only at chance level on CoLAC, while ChatGPT's performance (48.30 MCC) is also far below that of supervised models (59.03 MCC) and humans (65.11 MCC). Through cross-lingual transfer experiments and fine-grained linguistic analysis, we provide a detailed analysis of the model predictions and demonstrate for the first time that knowledge of linguistic acceptability can be transferred across typologically distinct languages, as well as traced back to pre-training. Our dataset is publicly available at \url{https://github.com/huhailinguist/CoLAC}
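
    For reference, the Matthews correlation coefficient (MCC) scores reported above can be computed as in the short sketch below; the gold labels and predictions are hypothetical placeholders, not CoLAC data.

# Minimal sketch of the Matthews correlation coefficient (MCC) metric reported
# above, computed with scikit-learn on hypothetical acceptability labels.
from sklearn.metrics import matthews_corrcoef

gold = [1, 0, 1, 1, 0, 0, 1, 0]      # 1 = acceptable, 0 = unacceptable (hypothetical)
pred = [1, 0, 0, 1, 0, 1, 1, 0]      # hypothetical model predictions

print(matthews_corrcoef(gold, pred))  # 1.0 = perfect, 0.0 = chance-level agreement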