4,987 research outputs found

    Analytical behaviour of concrete-encased CFST box stub columns under axial compression

    Concrete-encased CFST (concrete-filled steel tube) members have been widely used in high-rise buildings and bridge structures. In this paper, the axial performance of a typical concrete-encased CFST box member with an inner CFST and outer reinforced concrete (RC) is investigated. A finite element analysis (FEA) model is established to analyze the compressive behavior of the composite member, accounting for material nonlinearity and the interaction between the concrete and the steel tube. Good agreement is achieved between the measured and predicted results in terms of failure mode and the load-deformation relation. The verified FEA model is then used to conduct a full-range analysis of the load versus deformation relations. The load distributions among the different components, including concrete, steel tube and longitudinal bars, during four loading stages are discussed. Typical failure modes, internal force distribution, stress development and the contact stress between concrete and steel tube are also presented. A parametric study on the compressive behavior is conducted to investigate the effects of various parameters, e.g. concrete and steel strength, longitudinal bar ratio and stirrup spacing, on the sectional capacity and ductility of the concrete-encased CFST box member.

    Chen, J.; Han, L.; Wang, F.; Mu, T. (2018). Analytical behaviour of concrete-encased CFST box stub columns under axial compression. In: Proceedings of the 12th International Conference on Advances in Steel-Concrete Composite Structures (ASCCS 2018). Editorial Universitat Politècnica de València, 401-408. https://doi.org/10.4995/ASCCS2018.2018.6966

    Beyond Static Evaluation: A Dynamic Approach to Assessing AI Assistants' API Invocation Capabilities

    With the rise of Large Language Models (LLMs), AI assistants' ability to use tools, especially through API calls, has advanced notably. This progress has necessitated more accurate evaluation methods. Many existing studies adopt static evaluation, assessing AI assistants' API calls against pre-defined dialogue histories. However, such an evaluation method can be misleading, as an assistant that succeeds on a fixed history might still fail to generate correct API calls from the preceding human interaction in real cases. Instead of the resource-intensive approach of direct human-machine interaction, we propose Automated Dynamic Evaluation (AutoDE) to assess an assistant's API call capability without human involvement. In our framework, we closely mirror genuine human conversation patterns in human-machine interaction by using an LLM-based user agent, equipped with a user script to ensure human alignment. Experimental results highlight that AutoDE uncovers errors overlooked by static evaluation, aligning more closely with human assessment. Testing four AI assistants on our crafted benchmark, our method mirrored human evaluation more faithfully than conventional static evaluation did.

    Comment: Accepted at LREC-COLING 202
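    The dynamic evaluation loop the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the scripted user agent, the toy assistant, and the `get_weather` call are all invented stand-ins. In AutoDE proper, the user agent is itself an LLM prompted with a user script so its turns stay aligned with human behaviour; here a deterministic stub plays that role so the control flow is visible.

    ```python
    # Hypothetical sketch of an AutoDE-style dynamic evaluation loop.
    # All names (scripted_user_agent, toy_assistant, get_weather) are
    # illustrative assumptions, not the paper's actual components.

    def scripted_user_agent(script, history):
        """Emit the next user turn from the script; None when the script is exhausted."""
        turns_taken = len([m for m in history if m["role"] == "user"])
        return script[turns_taken] if turns_taken < len(script) else None

    def toy_assistant(history):
        """Stand-in assistant: emits an API call once the city has been mentioned."""
        for msg in history:
            if msg["role"] == "user" and "Paris" in msg["content"]:
                return {"api": "get_weather", "args": {"city": "Paris"}}
        return {"api": None, "args": {}}  # not enough information yet

    def dynamic_eval(script, expected_call, max_turns=5):
        """Drive the conversation turn by turn and check the assistant's final API call.

        Unlike static evaluation (which scores the assistant against a fixed,
        pre-written history), the history here is built up by the user agent,
        so failures in eliciting or timing the call are exposed.
        """
        history = []
        for _ in range(max_turns):
            utterance = scripted_user_agent(script, history)
            if utterance is None:
                break  # user script finished without a successful call
            history.append({"role": "user", "content": utterance})
            call = toy_assistant(history)
            if call["api"] is not None:
                return call == expected_call
        return False

    script = ["I need a forecast.", "For Paris, please."]
    expected = {"api": "get_weather", "args": {"city": "Paris"}}
    ```

    With this script, the assistant only has enough information to call the API on the second user turn, which is exactly the kind of interaction-dependent behaviour a pre-defined dialogue history cannot probe.
    
    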