
    Exergy analysis of porous cotton fabric drying process during the domestic air vented dryer

    This study evaluates the energy and exergy efficiencies of the fabric drying process in a domestic air-vented dryer. Exergy models of the drying process are formulated, and each stage is examined in terms of exergetic parameters. In addition, parametric studies covering the exergy destruction rates, exergy efficiencies, and exergy loss ratios of the system and its components are carried out under various operating conditions. The results indicate that exergy efficiency increases with drying rate. The heater is the component with the highest exergy destruction in the dryer, and its power significantly affects the exergy destruction of the whole drying process, whereas the fan and the drum-drive motor are lower exergy destruction components. A staged heating scheme that adjusts heater power according to the drying period is found to be an effective way to reduce the dryer's exergy destruction rate and the fabric damage caused by over-drying. Specifically, the dryer's exergy efficiency can be improved by increasing heater power during the warm-up and constant-rate periods, or by decreasing it during the falling-rate and blow-air periods. These findings are useful for the system design and performance optimization of domestic dryers in terms of reducing the irreversibility of the drying system.
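    The exergetic quantities the abstract refers to follow from standard second-law definitions. A minimal sketch (the function names and numerical values are illustrative assumptions, not data from the study):

```python
def exergy_efficiency(exergy_out: float, exergy_in: float) -> float:
    """Second-law (exergy) efficiency: useful exergy recovered over exergy supplied."""
    return exergy_out / exergy_in

def exergy_destruction(exergy_in: float, exergy_out: float, exergy_loss: float = 0.0) -> float:
    """Exergy destroyed by irreversibility: supplied minus recovered minus lost."""
    return exergy_in - exergy_out - exergy_loss

# Illustrative numbers only (kW):
eff = exergy_efficiency(0.9, 4.5)          # 0.2, i.e. 20% exergetic efficiency
dest = exergy_destruction(4.5, 0.9, 0.3)   # 3.3 kW destroyed
```

    Under this accounting, raising heater power when the drying rate is high (constant-rate period) increases `exergy_out` for the same supplied exergy, which is consistent with the staged-heating result reported above.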

    Diversity Oriented Synthesis of Furan Epoxide

    In this project we pursue the diversity-oriented synthesis of furan epoxides. Two main reactions are targeted with the epoxide: the Achmatowicz reaction on the furan ring, and the epoxide ring-opening reaction with an amine. Over this summer we were able to make the amino alcohol, and the Achmatowicz product had been made in the previous semester. We also attempted the Achmatowicz reaction on the amino alcohol product; however, the result did not show the expected structure and still needs further research. Beyond the organic synthesis, we also tested the bioactivity of our amino alcohol product, since it is similar to another bioactive structure, using the brine shrimp lethality assay.

    EAGLE: Speculative Sampling Requires Rethinking Feature Uncertainty

    Autoregressive decoding makes the inference of Large Language Models (LLMs) time-consuming. In this paper, we reconsider speculative sampling and derive two key observations. First, autoregression at the feature (second-to-top-layer) level is more straightforward than at the token level. Second, the inherent uncertainty in feature-level autoregression constrains its performance. Based on these insights, we introduce EAGLE (Extrapolation Algorithm for Greater Language-model Efficiency), a simple yet highly efficient speculative sampling framework. By incorporating a token sequence advanced by one time step, EAGLE effectively resolves the uncertainty, enabling precise second-to-top-layer feature prediction with minimal overhead. We conducted comprehensive evaluations of EAGLE, covering all models from the Vicuna and LLaMA2-Chat series, the MoE model Mixtral 8x7B Instruct, and tasks in dialogue, code generation, mathematical reasoning, and instruction following. For LLaMA2-Chat 70B, EAGLE achieved a latency speedup of 2.7x-3.5x and doubled throughput, while maintaining the distribution of the generated text.
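    The verification step of standard speculative sampling, which EAGLE builds on, can be sketched as a toy loop: a cheap draft model proposes tokens with probabilities `q`, and the target model (probabilities `p`) accepts each with probability min(1, p/q), which preserves the target distribution. The names, the injectable `rng`, and the omission of residual resampling after a rejection are all simplifications, not EAGLE's actual implementation:

```python
import random

def speculative_accept(draft_tokens, q_probs, p_probs, rng=random.random):
    """Accept draft token t with probability min(1, p(t)/q(t)); stop at the
    first rejection (the target model would then resample, omitted here)."""
    accepted = []
    for tok, q, p in zip(draft_tokens, q_probs, p_probs):
        if rng() < min(1.0, p / q):
            accepted.append(tok)   # target model agrees often enough: keep it
        else:
            break                  # rejection: discard the rest of the draft
    return accepted
```

    EAGLE's contribution sits upstream of this loop: by drafting in the second-to-top-layer feature space with a one-step-advanced token sequence, more draft tokens survive verification, which is where the reported speedup comes from.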

    Evolution-Operator-Based Single-Step Method for Image Processing

    This work proposes an evolution-operator-based single-time-step method for image and signal processing. The key component of the proposed method is a local spectral evolution kernel (LSEK) that analytically integrates a class of evolution partial differential equations (PDEs). From the point of view of PDEs, the LSEK provides the analytical solution in a single time step, is of spectral accuracy, and is free of stability constraints. From the point of view of image/signal processing, the LSEK gives rise to a family of lowpass filters with controllable time delay and amplitude scaling. The new evolution-operator-based method is constructed by pointwise adaptation of anisotropy in the coefficients of the LSEK. Perona-Malik-type anisotropic diffusion schemes are incorporated in the LSEK for image denoising. A forward-backward diffusion process is adopted in the LSEK for image deblurring or sharpening. A coupled PDE system is modified for image edge detection, and the resulting edge map is utilized for image enhancement. Extensive computer experiments are carried out to demonstrate the performance of the proposed method. Its major advantages are its single-step solution and its readiness for multidimensional data analysis.
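    The single-step idea can be illustrated with the simplest evolution PDE, the 1-D heat equation u_t = u_xx, whose exact solution at time t is convolution with a Gaussian kernel, i.e. a lowpass filter applied once instead of many finite-difference iterations. This is only a sketch of the principle, not the paper's LSEK (which handles a broader PDE class with anisotropic coefficients):

```python
import numpy as np

def gaussian_kernel(t: float, radius: int) -> np.ndarray:
    """Exact evolution kernel of u_t = u_xx at time t on a unit grid:
    a Gaussian of variance 2t, normalized to unit mass."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (4.0 * t))
    return k / k.sum()

def diffuse_once(signal: np.ndarray, t: float, radius: int = 10) -> np.ndarray:
    """Denoise by a single analytic evolution step (lowpass filtering),
    with no stability limit on the step size t."""
    return np.convolve(signal, gaussian_kernel(t, radius), mode="same")
```

    An explicit finite-difference scheme for the same equation would need step sizes below a stability bound; here t can be arbitrarily large in one application of the kernel, which is the "single-step, free of stability constraints" property the abstract describes.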

    RAIN: Your Language Models Can Align Themselves without Finetuning

    Large language models (LLMs) often demonstrate inconsistencies with human preferences. Previous research gathered human preference data and then aligned the pre-trained models using reinforcement learning or instruction tuning, the so-called finetuning step. In contrast, aligning frozen LLMs without any extra data is more appealing. This work explores the potential of the latter setting. We discover that by integrating self-evaluation and rewind mechanisms, unaligned LLMs can directly produce responses consistent with human preferences via self-boosting. We introduce a novel inference method, Rewindable Auto-regressive INference (RAIN), that allows pre-trained LLMs to evaluate their own generation and use the evaluation results to guide backward rewind and forward generation for AI safety. Notably, RAIN operates without the need for extra data for model alignment and abstains from any training, gradient computation, or parameter updates; during the self-evaluation phase, the model receives guidance on which human preference to align with through a fixed-template prompt, eliminating the need to modify the initial prompt. Experimental results evaluated by GPT-4 and humans demonstrate the effectiveness of RAIN: on the HH dataset, RAIN improves the harmlessness rate of LLaMA 30B over vanilla inference from 82% to 97%, while maintaining the helpfulness rate. Under the leading adversarial attack llm-attacks on Vicuna 33B, RAIN establishes a new defense baseline by reducing the attack success rate from 94% to 19%.
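    The evaluate-then-rewind control flow can be sketched as a toy loop. The names (`step_fn`, `evaluate_fn`), the scalar score, and the flat rewind (RAIN actually searches over a tree of partial generations) are simplifying assumptions, not the paper's procedure:

```python
def rewindable_generate(step_fn, evaluate_fn, max_len=50, threshold=0.5, max_rewinds=10):
    """Toy rewindable inference: extend the response by one segment, self-evaluate
    the candidate, and rewind (discard the segment and retry) when the score is
    too low. step_fn(tokens) -> non-empty list of new tokens;
    evaluate_fn(tokens) -> score in [0, 1]. No parameters are ever updated."""
    tokens, rewinds = [], 0
    while len(tokens) < max_len:
        candidate = tokens + step_fn(tokens)
        if evaluate_fn(candidate) >= threshold or rewinds >= max_rewinds:
            tokens = candidate   # segment judged safe enough: commit it
        else:
            rewinds += 1         # rewind: drop the candidate segment, regenerate
    return tokens
```

    The key property mirrored here is that alignment pressure comes entirely from inference-time control (evaluation plus rewind), with no training, gradients, or extra data.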