    Optimization of the Suburban Railway Train Operation Plan Based on the Zonal Mode

    The traditional all-stop train operation mode cannot adequately meet the demand of commuters with long travel distances and concentrated travel patterns. To meet this special travel demand, a zonal train operation mode based on "many-to-many" train stops is proposed. The coefficient of passenger exchange is used to locate suburban areas by depicting the travel characteristics of commuters. Operational separating points within the suburban area are used as decision variables, and the combined cost components of the model are analyzed, including passenger travel costs and railway operating costs. An integer programming model minimizing the overall cost is established, and a genetic algorithm is employed to solve it. The results show relative benefits in operating costs and travel time, and the sensitivity analysis of both the coefficient of passenger exchange and passenger intensity shows that the zonal operation mode is suitable for suburban railways with concentrated travelers. However, the research also shows that when passenger volume rises to a very high level, the number of zones is limited by the maximum capacity of the railway line, which may cause a decline in relative operational efficiency.
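
    As a minimal, hypothetical illustration of the approach described in the abstract (an integer program over separating points solved by a genetic algorithm), the Python sketch below evolves separating-point positions to minimize a cost function. The station count, number of separating points, and the toy cost surrogate are assumptions made here for illustration; the paper's actual cost model is not reproduced.

```python
import random

# Hypothetical illustration of optimizing zonal separating points with a genetic
# algorithm. The station count, number of separating points, and the toy cost
# surrogate below are assumptions; the paper's actual cost model (passenger
# travel cost + railway operating cost) is not reproduced here.

N_STATIONS = 20                      # suburban stations, indexed 0..19
N_SEPARATORS = 3                     # operational separating points to place
POP_SIZE, GENERATIONS, MUTATION_RATE = 40, 200, 0.2

def total_cost(separators):
    """Stand-in for the combined passenger travel and railway operating cost."""
    bounds = [0] + sorted(separators) + [N_STATIONS]
    sizes = [b - a for a, b in zip(bounds, bounds[1:])]
    # toy surrogate: penalize unevenly sized operating zones
    target = N_STATIONS / (N_SEPARATORS + 1)
    return sum((s - target) ** 2 for s in sizes)

def random_individual():
    return sorted(random.sample(range(1, N_STATIONS), N_SEPARATORS))

def crossover(a, b):
    return sorted(random.sample(sorted(set(a) | set(b)), N_SEPARATORS))

def mutate(ind):
    if random.random() < MUTATION_RATE:
        candidates = set(ind)
        candidates.discard(random.choice(ind))          # drop one point
        while len(candidates) < N_SEPARATORS:           # add new distinct points
            candidates.add(random.randrange(1, N_STATIONS))
        return sorted(candidates)
    return ind

population = [random_individual() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=total_cost)
    parents = population[:POP_SIZE // 2]                # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best separating points:", min(population, key=total_cost))
```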

    Gain Scheduling Control of Nonlinear Shock Motion Based on Equilibrium Manifold Linearization Model

    The equilibrium manifold linearization model of nonlinear shock motion offers higher accuracy and lower complexity than other models such as the small-perturbation model and the piecewise-linear model. This paper analyzes the physical significance of the equilibrium manifold linearization model and reveals the self-feedback mechanism of shock motion, which helps describe the stability and dynamics of shock motion. Based on the model, the paper puts forward a gain scheduling control method for nonlinear shock motion. Simulation has shown the validity of the control scheme.
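
    As a loose illustration of the gain scheduling idea mentioned above (not the paper's shock-motion model or controller), the sketch below interpolates controller gains, designed at a few equilibrium points, using the current value of a scheduling variable. All numbers and the PI structure are assumptions.

```python
import numpy as np

# Hypothetical sketch of gain scheduling: linear controller gains designed at a
# few equilibrium (scheduling) points are interpolated by the current operating
# condition. The scheduling variable, gain tables, and PI law are illustrative,
# not the shock-motion model from the paper.

schedule_points = np.array([0.2, 0.5, 0.8])   # scheduling variable values
kp_table = np.array([1.5, 2.3, 3.1])          # proportional gains per point
ki_table = np.array([0.4, 0.7, 1.1])          # integral gains per point

def scheduled_gains(sigma):
    """Interpolate the gain tables at the current scheduling variable sigma."""
    kp = np.interp(sigma, schedule_points, kp_table)
    ki = np.interp(sigma, schedule_points, ki_table)
    return kp, ki

def control(sigma, error, error_integral, dt=1e-3):
    """One step of a gain-scheduled PI control law around the local equilibrium."""
    kp, ki = scheduled_gains(sigma)
    error_integral += error * dt
    return kp * error + ki * error_integral, error_integral

u, ei = control(sigma=0.35, error=0.05, error_integral=0.0)
print(f"control input: {u:.4f}")
```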

    Iterative Forward Tuning Boosts In-context Learning in Language Models

    Large language models (LLMs) have exhibited an emergent in-context learning (ICL) ability. However, ICL models that can solve ordinary cases are hard to extend to more complex tasks because they process the demonstration examples only once. This single-turn ICL is inconsistent with the human decision-making process of learning from analogy. In this paper, we propose an effective and efficient two-stage framework to boost ICL in LLMs by exploiting a dual form between Transformer attention and gradient-descent-based optimization. Concretely, we divide the ICL process into a "Deep-Thinking" stage and an inference stage. The "Deep-Thinking" stage performs iterative forward optimization of the demonstrations, which is expected to boost the reasoning abilities of LLMs at test time by "thinking about" the demonstrations multiple times. It produces accumulated meta-gradients by manipulating the Key-Value matrices in the self-attention modules of the Transformer. The inference stage then takes only the test query as input, without concatenating demonstrations, and applies the learned meta-gradients through attention for output prediction. In this way, demonstrations are not required during the inference stage, since they have already been learned and stored in the final meta-gradients. LLMs can thus be effectively and efficiently adapted to downstream tasks. Extensive experiments on ten classification and multiple-choice datasets show that our method achieves substantially better performance than standard ICL in terms of both accuracy and efficiency.
    Comment: 14 pages, 5 figures
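
    The toy sketch below is only a rough analogy of the two-stage idea described above, not the authors' method or code: a "deep-thinking" loop repeatedly passes the demonstrations through a single attention head and accumulates an update on the key/value projections (a stand-in for the accumulated meta-gradients), and inference then uses only the query with the updated projections.

```python
import numpy as np

# Rough analogy only: dimensions, the accumulation rule, and the single-head
# attention are all illustrative assumptions, not the paper's implementation.

rng = np.random.default_rng(0)
d = 8
W_k, W_v, W_q = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))

def attention(query, context, W_q, W_k, W_v):
    Q, K, V = query @ W_q, context @ W_k, context @ W_v
    weights = np.exp(Q @ K.T / np.sqrt(d))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

demos = rng.normal(size=(4, d))        # demonstration token representations
query = rng.normal(size=(1, d))        # test query representation

# "Deep-thinking" analogue: iterate over the demonstrations, accumulating small
# updates to the key/value projections (a surrogate for meta-gradients).
delta_k = np.zeros_like(W_k)
delta_v = np.zeros_like(W_v)
for _ in range(5):
    out = attention(demos, demos, W_q, W_k + delta_k, W_v + delta_v)
    delta_k += 0.01 * demos.T @ out    # toy accumulation rule
    delta_v += 0.01 * demos.T @ out

# Inference analogue: only the query is fed in; the demonstrations are "stored"
# in the accumulated deltas applied to the key/value projections.
prediction = attention(query, query, W_q, W_k + delta_k, W_v + delta_v)
print(prediction.shape)
```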

    PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts

    Perceiving multi-modal information and conducting dialogues with humans is a long-term goal of artificial intelligence. Pre-training is commonly regarded as an effective approach for multi-modal dialogue. However, due to the limited availability of multi-modal dialogue data, research on multi-modal dialogue pre-training is still scarce. Another intriguing challenge stems from the encompassing nature of multi-modal dialogue, which involves various modalities and tasks; moreover, new forms of tasks may arise at unpredictable points in the future. Hence, multi-modal dialogue models must be flexible enough to adapt to such scenarios. This paper proposes PaCE, a unified, structured, compositional multi-modal dialogue pre-training framework. It combines several fundamental experts to accommodate multiple dialogue-related tasks and can be pre-trained with limited dialogue data and extensive non-dialogue multi-modal data. Furthermore, we propose a progressive training method in which old experts from earlier stages assist new experts, facilitating the expansion of their capabilities. Experimental results demonstrate that PaCE achieves state-of-the-art results on eight multi-modal dialogue benchmarks.
    Comment: ACL 202
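
    To make the "combination of fundamental experts" idea concrete, here is a small hypothetical sketch in PyTorch. The expert names, routing table, residual composition, and freezing step are invented for illustration; they are not PaCE's actual architecture or training procedure.

```python
import torch
import torch.nn as nn

# Illustrative sketch of a compositional-experts layout: each "expert" is a
# small module, and a task is served by composing a subset of experts.

class Expert(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)

experts = nn.ModuleDict({
    "text": Expert(),
    "image": Expert(),
    "context": Expert(),
    "grounding": Expert(),     # a newer expert added in a later training stage
})

# hypothetical task-to-expert routing table
task_experts = {
    "intent_prediction": ["text", "context"],
    "image_retrieval": ["text", "image"],
    "response_generation": ["text", "image", "context", "grounding"],
}

def forward_for_task(task, x):
    out = x
    for name in task_experts[task]:
        out = out + experts[name](out)   # compose selected experts residually
    return out

# progressive-training idea: freeze earlier experts while a new one is learned
for name in ["text", "image", "context"]:
    for p in experts[name].parameters():
        p.requires_grad = False

x = torch.randn(2, 64)
print(forward_for_task("response_generation", x).shape)
```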

    Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning

    Table-based reasoning has shown remarkable progress in combining deep models with discrete reasoning, which requires reasoning over both free-form natural language (NL) questions and structured tabular data. However, previous table-based reasoning solutions usually suffer significant performance degradation on huge evidence tables. In addition, most existing methods struggle to reason over complex questions because the required information is scattered across different places. To alleviate these challenges, we exploit large language models (LLMs) as decomposers for effective table-based reasoning: they (i) decompose huge evidence (a huge table) into sub-evidence (a small table) to mitigate the interference of useless information for table reasoning, and (ii) decompose complex questions into simpler sub-questions for text reasoning. Specifically, we first use the LLMs to break down the evidence (tables) involved in the current question, retaining the relevant evidence and excluding the irrelevant remainder of the huge table. In addition, we propose a "parsing-execution-filling" strategy to alleviate the hallucination dilemma of the chain of thought by decoupling logic from numerical computation in each step. Extensive experiments show that our method effectively leverages decomposed evidence and questions and outperforms strong baselines on the TabFact, WikiTableQuestions, and FetaQA datasets. Notably, our model outperforms human performance for the first time on the TabFact dataset.
    Comment: SIGIR 202
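
    A hypothetical sketch of the decompose-then-reason flow described above is given below. The prompts and the call_llm() helper are placeholders for an arbitrary LLM API; they are not the paper's actual prompts, nor its "parsing-execution-filling" implementation.

```python
# Hypothetical sketch: decompose the table into sub-evidence, decompose the
# question into sub-questions, then answer from the decomposed pieces.
# call_llm() is a placeholder for whatever LLM API is available.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text completion."""
    raise NotImplementedError

def decompose_table(question: str, table_markdown: str) -> str:
    prompt = (
        "Given the question and the table, keep only the rows and columns "
        "needed to answer it, returned as a smaller markdown table.\n"
        f"Question: {question}\nTable:\n{table_markdown}"
    )
    return call_llm(prompt)                  # sub-evidence: a small table

def decompose_question(question: str, sub_table: str) -> list[str]:
    prompt = (
        "Break the question into simpler sub-questions, one per line.\n"
        f"Question: {question}\nTable:\n{sub_table}"
    )
    return call_llm(prompt).splitlines()

def answer(question: str, table_markdown: str) -> str:
    sub_table = decompose_table(question, table_markdown)
    sub_questions = decompose_question(question, sub_table)
    partial_answers = [call_llm(f"Table:\n{sub_table}\nAnswer briefly: {q}")
                       for q in sub_questions]
    final_prompt = (f"Table:\n{sub_table}\nSub-answers: {partial_answers}\n"
                    f"Now answer the original question: {question}")
    return call_llm(final_prompt)
```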

    Improving gold recovery from a refractory ore via Na₂SO₄ assisted roasting and alkaline Na₂S leaching

    Gold recovery from refractory gold ores with controlled roasting has remained well below 80%. Na₂SO₄ was added in an O₂-enriched, single-stage roasting of a refractory gold ore to improve its gold recovery. Changes in the physicochemical properties of the calcines suggested that this reduced sintering and facilitated the formation of pores and a water-soluble phase within the calcine. Thermodynamic analysis and leaching results demonstrated that Na₂S solutions could effectively remove Sb species from the calcine. An extraction process that combines Na₂SO₄-assisted roasting and alkaline Na₂S leaching is shown to achieve a gold recovery of over 95% from the refractory ore.