
    The human response study to whole-body vibration in the cab of heavy duty truck

    Lower back pain is the most significant health problem reported by industrial workers who operate commercial trucks. Several factors, such as road type, truck type, and load, have been found to affect drivers' vibration exposure. The main purpose of the current research is to collect the whole-body vibration responses of truck drivers and analyze the current levels of excitation in a variety of trucks. The present thesis also examines the effects of different trucks, road types, and loads on whole-body vibration. Data collected in the United States on different types of trucks were processed and analyzed per the international standard ISO 2631-1. The first set of data was taken with an HVM-100 on a scheduled on-road route from the driver's seat cushion under different road and load conditions. The second set of data was collected by a DEWE data acquisition system from trucks running on the same on-road route, with additional transducers on the driver's seat back, the passenger's seat cushion, and the cab floor. For the HVM data, the frequency-weighted r.m.s. accelerations were compared across trucks on two road types: interstate and rural highway. Results from the same trucks with and without a loaded trailer are also discussed in this thesis. The data recorded by the DEWE system were analyzed with a Matlab program to compare the frequency-weighted accelerations for different trucks; additional analyses using VDV and jerk were also performed. Road type was the primary factor affecting the driver's exposure. For both studies, the minimum 8-hr and 11-hr limits requiring a medical examination, set by the standard for health, were exceeded several times, whereas the corresponding comfort limits were exceeded frequently. Overall, the driver was found to be safe per ISO 2631-1, but the comfort levels were often exceeded; it is suggested that necessary action be taken to improve comfort.
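    The two exposure metrics compared in this study can be sketched as follows. This is a minimal illustration that assumes the acceleration samples have already been frequency-weighted; the full ISO 2631-1 procedure applies the standard's Wk/Wd band-limiting filters first, which are omitted here, and the function names are illustrative rather than taken from the thesis.

```python
import numpy as np

def weighted_rms(a_w: np.ndarray) -> float:
    """Frequency-weighted r.m.s. acceleration (m/s^2),
    computed from already-weighted samples a_w."""
    return float(np.sqrt(np.mean(a_w ** 2)))

def vibration_dose_value(a_w: np.ndarray, fs: float) -> float:
    """Vibration dose value, VDV = (integral of a_w^4 dt)^(1/4),
    in m/s^1.75; fourth-power averaging makes it more sensitive
    to shocks and transients than the r.m.s. value."""
    dt = 1.0 / fs
    return float((np.sum(a_w ** 4) * dt) ** 0.25)

# A constant 2 m/s^2 weighted signal lasting 1 s at fs = 100 Hz
a = np.full(100, 2.0)
print(weighted_rms(a))                  # 2.0
print(vibration_dose_value(a, 100.0))   # 2.0
```

    For such a constant signal the two metrics coincide; on real truck data the VDV grows faster than the r.m.s. value whenever the ride contains shocks, which is why the thesis reports both.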

    Stability Analysis of the Rotary Drill-String

    Oil and natural gas are major energy sources for modern society. A rotary drilling system is the best-known technology for extracting them from underground. The vibration and stability of drilling systems have been studied for decades to improve drilling efficiency and protect expensive down-hole components. It is well known that severe drill-string vibrations are caused by many different loads: axial loads such as the hook load and the self-weight of the drill-string, end torques applied by the surface motor and restrained at the bit, the inertial load caused by whirling, the fluid drag force, and the contact force between the borehole wall and the drill-string. The drill-string is usually subjected to a complex combination of these loads. The motivation for this dissertation is the need to understand the complex vibration states and the stability of the drill-string in order to better control its constructive and destructive potential. A mathematical model is proposed to describe the steady-state stability of a long, vertical, rectilinear drill-string. The model accounts for a complex combination of constant and variable loads that affect the behavior of drill-strings. The first critical values of these loads and the corresponding mode shapes are obtained by the analytical method and the Rayleigh-Ritz method. COMSOL and ABAQUS are used to validate the numerical results for the cases without analytical solutions. These results show that the Rayleigh-Ritz method gives accurate predictions and provides a good way to understand the dynamics of the drilling process more deeply and to predict the instability of the drilling system.
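    The Rayleigh-Ritz idea used here, estimating a critical load from an assumed admissible shape, can be shown on the simplest structural instance: Euler buckling of a pinned-pinned column. This is a hedged sketch of the general method, not the dissertation's drill-string model, which involves a far more complex load combination.

```python
import sympy as sp

x, L, E, I = sp.symbols('x L E I', positive=True)

# Admissible trial shape satisfying the pinned-pinned
# boundary conditions w(0) = w(L) = 0
w = sp.sin(sp.pi * x / L)

# Rayleigh quotient: bending strain energy divided by the
# work done by the axial load per unit load P
U = sp.integrate(E * I * sp.diff(w, x, 2) ** 2, (x, 0, L)) / 2
W = sp.integrate(sp.diff(w, x) ** 2, (x, 0, L)) / 2
P_cr = sp.simplify(U / W)

print(P_cr)  # pi**2*E*I/L**2, the exact Euler critical load
```

    Because this trial shape happens to be the exact buckling mode, the quotient reproduces the classical result exactly; with an approximate shape, the Rayleigh-Ritz estimate bounds the true critical load from above, which is what makes the method useful for loads without analytical solutions.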

    Investigating Zero- and Few-shot Generalization in Fact Verification

    In this paper, we explore zero- and few-shot generalization for fact verification (FV), which aims to generalize the FV model trained on well-resourced domains (e.g., Wikipedia) to low-resourced domains that lack human annotations. To this end, we first construct a benchmark dataset collection which contains 11 FV datasets representing 6 domains. We conduct an empirical analysis of generalization across these FV datasets, finding that current models generalize poorly. Our analysis reveals that several factors affect generalization, including dataset size, length of evidence, and the type of claims. Finally, we show that two directions of work improve generalization: 1) incorporating domain knowledge via pretraining on specialized domains, and 2) automatically generating training data via claim generation. Comment: AACL-IJCNLP 2023 (main conference, long paper)

    Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning

    Large Language Models (LLMs) have shown human-like reasoning abilities but still struggle with complex logical problems. This paper introduces a novel framework, Logic-LM, which integrates LLMs with symbolic solvers to improve logical problem-solving. Our method first utilizes LLMs to translate a natural language problem into a symbolic formulation. Afterward, a deterministic symbolic solver performs inference on the formulated problem. We also introduce a self-refinement module, which utilizes the symbolic solver's error messages to revise symbolic formalizations. We demonstrate Logic-LM's effectiveness on five logical reasoning datasets: ProofWriter, PrOntoQA, FOLIO, LogicalDeduction, and AR-LSAT. On average, Logic-LM achieves a significant performance boost of 39.2% over using LLM alone with standard prompting and 18.4% over LLM with chain-of-thought prompting. Our findings suggest that Logic-LM, by combining LLMs with symbolic logic, offers a promising avenue for faithful logical reasoning. Code and data are publicly available at https://github.com/teacherpeterpan/Logic-LLM. Comment: EMNLP 2023 (Findings, long paper)
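    The translate-then-solve pipeline can be illustrated with a toy deterministic solver. In this sketch the symbolic formulation is written by hand, whereas in Logic-LM the LLM would produce it, and simple forward chaining over Horn rules stands in for the paper's off-the-shelf solvers; the predicates are invented for illustration.

```python
def forward_chain(facts, rules):
    """Deterministic inference over Horn rules.
    Each rule is a pair (set_of_premises, conclusion);
    iterate to a fixed point, adding every entailed fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hand-written symbolic formulation (the LLM's job in Logic-LM):
facts = {"dog(rex)"}
rules = [({"dog(rex)"}, "mammal(rex)"),
         ({"mammal(rex)"}, "animal(rex)")]

print("animal(rex)" in forward_chain(facts, rules))  # True
```

    Because the solver is deterministic, any wrong answer must come from a wrong formulation, which is exactly what the self-refinement module targets by feeding solver error messages back to the LLM.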

    MAF: Multi-Aspect Feedback for Improving Reasoning in Large Language Models

    Language Models (LMs) have shown impressive performance in various natural language tasks. However, when it comes to natural language reasoning, LMs still face challenges such as hallucination, generating incorrect intermediate reasoning steps, and making mathematical errors. Recent research has focused on enhancing LMs through self-improvement using feedback. Nevertheless, existing approaches relying on a single generic feedback source fail to address the diverse error types found in LM-generated reasoning chains. In this work, we propose Multi-Aspect Feedback, an iterative refinement framework that integrates multiple feedback modules, including frozen LMs and external tools, each focusing on a specific error category. Our experimental results demonstrate the efficacy of our approach in addressing several errors in the LM-generated reasoning chain, thus improving the overall performance of an LM on several reasoning tasks. We see a relative improvement of up to 20% in Mathematical Reasoning and up to 18% in Logical Entailment. Comment: Accepted at EMNLP 2023 Main Conference, Camera Ready
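    The iterative control flow of such a framework can be sketched in a few lines. This is a hypothetical skeleton, not the paper's implementation: each aspect module (a frozen LM or an external tool in the paper) is reduced here to a function returning an approval flag and a revised answer, and the toy math checker below is invented for illustration.

```python
def multi_aspect_refine(answer, modules, max_iters=3):
    """Iteratively revise an answer with aspect-specific feedback
    modules; each module returns (is_ok, revised_answer).
    Stop early once every aspect approves the answer."""
    for _ in range(max_iters):
        all_ok = True
        for check in modules:
            ok, answer = check(answer)
            all_ok = all_ok and ok
        if all_ok:
            break
    return answer

# Toy math-aspect module standing in for a frozen LM or tool:
def math_check(ans):
    if "2 + 2 = 5" in ans:
        return False, ans.replace("2 + 2 = 5", "2 + 2 = 4")
    return True, ans

print(multi_aspect_refine("so 2 + 2 = 5", [math_check]))
# so 2 + 2 = 4
```

    Keeping one module per error category is the point of the multi-aspect design: a logic checker never has to also catch arithmetic slips, so each module can stay simple and specialized.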

    ContraQA: Question Answering under Contradicting Contexts

    With a rise in false, inaccurate, and misleading information in propaganda, news, and social media, real-world Question Answering (QA) systems face the challenges of synthesizing and reasoning over contradicting information to derive correct answers. This urgency gives rise to the need to make QA systems robust to misinformation, a topic previously unexplored. We study the risk of misinformation to QA models by investigating the behavior of the QA model under contradicting contexts that are mixed with both real and fake information. We create the first large-scale dataset for this problem, namely Contra-QA, which contains over 10K human-written and model-generated contradicting pairs of contexts. Experiments show that QA models are vulnerable under contradicting contexts brought by misinformation. To defend against such a threat, we build a misinformation-aware QA system as a counter-measure that integrates question answering and misinformation detection in a joint fashion. Comment: Technical report

    SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim Verification on Scientific Tables

    Current scientific fact-checking benchmarks exhibit several shortcomings, such as biases arising from crowd-sourced claims and an over-reliance on text-based evidence. We present SCITAB, a challenging evaluation dataset consisting of 1.2K expert-verified scientific claims that 1) originate from authentic scientific publications and 2) require compositional reasoning for verification. The claims are paired with evidence-containing scientific tables annotated with labels. Through extensive evaluations, we demonstrate that SCITAB poses a significant challenge to state-of-the-art models, including table-based pretraining models and large language models. All models except GPT-4 achieved performance barely above random guessing. Popular prompting techniques, such as Chain-of-Thought, do not yield significant performance gains on SCITAB. Our analysis uncovers several unique challenges posed by SCITAB, including table grounding, claim ambiguity, and compositional reasoning. Our codes and data are publicly available at https://github.com/XinyuanLu00/SciTab. Comment: Accepted at EMNLP 2023 (main conference, long paper)