109 research outputs found

    EFFECT OF ROTATIONAL GRAZING ON PLANT AND ANIMAL PRODUCTION

    It is commonly understood that rotational cattle grazing provides better yields than continuous grazing, but a quantitative analysis is lacking in the agricultural literature. In rotational grazing, cattle periodically move among paddocks, in contrast to continuous grazing, in which the cattle graze a single plot for the entire grazing season. We construct a differential equation model of vegetation grazing on a fixed area to show that production yields and stockpiled forage are greater for rotational grazing than for continuous grazing. Our results show that both the number of cattle per acre and the stockpiled forage increase for many rotational configurations.
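
    As a hedged illustration of the kind of model the abstract describes (not the authors' actual equations), the sketch below uses logistic vegetation regrowth with a constant grazing offtake and compares one continuously grazed plot against two rotated paddocks; every parameter value and the 30-day rotation period are hypothetical.

```python
# Minimal sketch (not the paper's model): logistic vegetation regrowth with
# grazing offtake, comparing continuous grazing on one plot against rotation
# between two paddocks. All parameter values are hypothetical.
r, K = 0.05, 1000.0        # regrowth rate (1/day), carrying capacity (kg/acre)
g = 10.0                   # herd offtake per acre of grazed area (kg/acre/day)
dt, days = 1.0, 180        # Euler time step (day) and season length (days)

def step(v, offtake):
    """One Euler step of dv/dt = r*v*(1 - v/K) - offtake."""
    dv = r * v * (1.0 - v / K) - offtake
    return max(v + dt * dv, 0.0)

# Continuous grazing: the herd spreads over the whole area every day.
v_cont = K / 2
for _ in range(days):
    v_cont = step(v_cont, g)

# Rotational grazing: two equal paddocks, the herd moves every 30 days, so the
# active paddock sees twice the per-acre offtake while the other paddock rests.
v_rot = [K / 2, K / 2]
for day in range(days):
    active = (day // 30) % 2
    v_rot = [step(v, 2 * g if i == active else 0.0) for i, v in enumerate(v_rot)]

print(f"continuous grazing: {v_cont:.0f} kg/acre standing forage at season end")
print(f"rotational grazing: {sum(v_rot) / len(v_rot):.0f} kg/acre standing forage at season end")
```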

    Resonant Anomaly Detection with Multiple Reference Datasets

    An important class of techniques for resonant anomaly detection in high energy physics builds models that can distinguish between reference and target datasets, where only the latter has appreciable signal. Such techniques, including Classification Without Labels (CWoLa) and Simulation Assisted Likelihood-free Anomaly Detection (SALAD), rely on a single reference dataset. They cannot take advantage of the multiple reference datasets that are commonly available and thus cannot fully exploit the available information. In this work, we propose generalizations of CWoLa and SALAD for settings where multiple reference datasets are available, building on weak supervision techniques. We demonstrate improved performance in a number of settings with realistic and synthetic data. As an added benefit, our generalizations enable us to provide finite-sample guarantees, improving on existing asymptotic analyses.
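
    As a rough illustration of the single-reference idea these methods build on, the sketch below trains an off-the-shelf classifier to separate a target sample from several pooled, signal-free reference samples and uses its output as an anomaly score; the toy Gaussian data, the classifier choice, and the equal pooling are assumptions for illustration, not the estimators proposed in this work.

```python
# Rough CWoLa-style sketch with several reference datasets (toy data; not the
# generalization proposed in the paper). Requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def background(n):
    return rng.normal(0.0, 1.0, size=(n, 2))   # signal-free toy process

def signal(n):
    return rng.normal(2.0, 0.5, size=(n, 2))    # toy resonant signal

# Three reference datasets, each signal-free but with a small systematic shift;
# the target dataset contains background plus a small signal admixture.
references = [background(5000) + rng.normal(0.0, 0.05, size=2) for _ in range(3)]
target = np.vstack([background(5000), signal(250)])

# Weak supervision: label the pooled references 0 and the target 1, then train
# a classifier; its score on target events acts as the anomaly score.
X = np.vstack(references + [target])
y = np.concatenate([np.zeros(3 * 5000), np.ones(len(target))])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

scores = clf.predict_proba(target)[:, 1]
print("mean score (background-like events):", round(scores[:5000].mean(), 3))
print("mean score (signal-like events):    ", round(scores[5000:].mean(), 3))
```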

    Experimental heat transfer investigations of a double pipe U-tube heat exchanger equipped with twisted tape and cut twisted tape internals

    For several decades, heat exchangers have been used for both heating and cooling applications in industries ranging from process engineering to space heating. Among the various types of heat exchangers, U-tube heat exchangers are preferred owing to their ability to handle larger flow rates and their simplicity of construction. U-tube exchangers are often equipped with internals of various forms that facilitate higher heat transfer rates and thermal efficiencies. Although higher heat transfer rates have been established with the addition of internals, there is a lack of coherence in describing the underlying complex physical phenomena such as heat transfer boundary layers and turbulence characteristics. In the current study, the effect of twisted and cut twisted tape inserts on heat transfer enhancement and pressure drop in a counter-flow double pipe U-tube heat exchanger has been investigated using an experimental approach and compared with bare tubes in the absence of internals. Physical parameters such as the heat transfer rate, pressure drop, and Nusselt number are determined for a range of mass flow rates.
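
    For orientation only, the sketch below shows a typical data-reduction path from measured temperatures and a flow rate to heat duty, log-mean temperature difference, and a Nusselt number; all numbers, the tube geometry, and the property values are placeholders rather than measurements from this study.

```python
# Illustrative data-reduction sketch for a counter-flow double pipe exchanger
# (placeholder numbers only; not measurements from this study).
import math

# Measured quantities (hypothetical)
m_dot_hot = 0.12                       # hot-side mass flow rate, kg/s
T_hot_in, T_hot_out = 343.0, 328.0     # hot stream temperatures, K
T_cold_in, T_cold_out = 298.0, 310.0   # cold stream temperatures, K

# Approximate water properties near the mean temperature
cp = 4180.0                            # specific heat, J/(kg K)
k = 0.64                               # thermal conductivity, W/(m K)

# Assumed inner-tube geometry
D_i = 0.016                            # inner tube diameter, m
A = math.pi * D_i * 2.0                # heat transfer area for a 2 m tube, m^2

# Heat duty from the hot-stream energy balance: Q = m_dot * cp * (T_in - T_out)
Q = m_dot_hot * cp * (T_hot_in - T_hot_out)

# Log-mean temperature difference for counter-current flow
dT1 = T_hot_in - T_cold_out
dT2 = T_hot_out - T_cold_in
lmtd = (dT1 - dT2) / math.log(dT1 / dT2)

# Overall heat transfer coefficient, and a crude Nusselt number that treats U
# as if it were the inner-side film coefficient
U = Q / (A * lmtd)
Nu = U * D_i / k

print(f"Q    = {Q:.0f} W")
print(f"LMTD = {lmtd:.1f} K")
print(f"U    = {U:.0f} W/m^2K,  Nu ~ {Nu:.0f}")
```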

    Childhood Obesity: Time to Obsess

    Overweight and obesity are defined as abnormal or excessive fat accumulation that presents a risk to health. Despite years of attention, obesity remains a major health problem affecting individuals from infancy to old age. Looking back at its causes, the main reasons for obesity include excess calorie intake, a sedentary lifestyle, insufficient sleep and, rarely, obesity genes. People with obesity have a higher probability of diabetes, heart disease, stroke, arthritis, and some cancers. The present work traces obesity research from the past, through recent advances, to future challenges. Childhood obesity has more than doubled in children and quadrupled in adolescents over the past 30 years.

    Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot Classification

    Recent work has shown that language models' (LMs) prompt-based learning capabilities make them well suited for automating data labeling in domains where manual annotation is expensive. The challenge is that while writing an initial prompt is cheap, improving a prompt is costly -- practitioners often require significant labeled data in order to evaluate the impact of prompt modifications. Our work asks whether it is possible to improve prompt-based learning without additional labeled data. We approach this problem by attempting to modify the predictions of a prompt, rather than the prompt itself. Our intuition is that accurate predictions should also be consistent: samples which are similar under some feature representation should receive the same prompt prediction. We propose Embroid, a method which computes multiple representations of a dataset under different embedding functions, and uses the consistency between the LM predictions for neighboring samples to identify mispredictions. Embroid then uses these neighborhoods to create additional predictions for each sample, and combines these predictions with a simple latent variable graphical model in order to generate a final corrected prediction. In addition to providing a theoretical analysis of Embroid, we conduct a rigorous empirical evaluation across six different LMs and up to 95 different tasks. We find that (1) Embroid substantially improves performance over original prompts (e.g., by an average of 7.3 points on GPT-JT), (2) also realizes improvements for more sophisticated prompting strategies (e.g., chain-of-thought), and (3) can be specialized to domains like law through the embedding functions.
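
    To make the smoothing step concrete, here is a minimal sketch (not the authors' implementation): each embedding space contributes an auxiliary vote from the majority prompt prediction among a sample's nearest neighbors, and a plain majority over all votes stands in for the latent variable graphical model used in the paper.

```python
# Minimal sketch of Embroid-style prediction smoothing (illustrative only):
# auxiliary votes come from nearest-neighbor prediction consistency in several
# embedding spaces, combined here by simple majority instead of the latent
# variable graphical model. Requires numpy and scikit-learn.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighbor_vote(embeddings, lm_preds, k=10):
    """For each sample, the majority LM prediction among its k nearest neighbors."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    _, idx = nn.kneighbors(embeddings)
    idx = idx[:, 1:]                       # drop the sample itself
    return (lm_preds[idx].mean(axis=1) > 0.5).astype(int)

def embroid_smooth(embedding_list, lm_preds):
    """Combine the LM's own predictions with one neighbor vote per embedding."""
    votes = [lm_preds] + [neighbor_vote(E, lm_preds) for E in embedding_list]
    votes = np.stack(votes)                # shape: (1 + n_embeddings, n_samples)
    return (votes.mean(axis=0) > 0.5).astype(int)

# Toy usage with random embeddings and noisy binary prompt predictions.
rng = np.random.default_rng(0)
n = 200
E1, E2 = rng.normal(size=(n, 32)), rng.normal(size=(n, 64))
lm_preds = rng.integers(0, 2, size=n)
smoothed = embroid_smooth([E1, E2], lm_preds)
print("flipped predictions:", int((smoothed != lm_preds).sum()))
```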

    Perfectly Balanced: Improving Transfer and Robustness of Supervised Contrastive Learning

    An ideal learned representation should display transferability and robustness. Supervised contrastive learning (SupCon) is a promising method for training accurate models, but produces representations that do not capture these properties due to class collapse -- when all points in a class map to the same representation. Recent work suggests that "spreading out" these representations improves them, but the precise mechanism is poorly understood. We argue that creating spread alone is insufficient for better representations, since spread is invariant to permutations within classes. Instead, both the correct degree of spread and a mechanism for breaking this invariance are necessary. We first prove that adding a weighted class-conditional InfoNCE loss to SupCon controls the degree of spread. Next, we study three mechanisms to break permutation invariance: using a constrained encoder, adding a class-conditional autoencoder, and using data augmentation. We show that the latter two encourage clustering of latent subclasses under more realistic conditions than the former. Using these insights, we show that adding a properly-weighted class-conditional InfoNCE loss and a class-conditional autoencoder to SupCon achieves 11.1 points of lift on coarse-to-fine transfer across 5 standard datasets and 4.7 points on worst-group robustness on 3 datasets, setting state-of-the-art on CelebA by 11.5 points.
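
    As a hedged sketch of what a weighted class-conditional InfoNCE term added to SupCon could look like in PyTorch (the paper's exact loss, weighting, and batch layout may differ): for each anchor the positive is its other augmented view, and the negatives are restricted to examples of the same class.

```python
# Minimal sketch of a weighted class-conditional InfoNCE term (illustrative;
# not necessarily the paper's exact formulation). Assumes two augmented views
# per example with L2-normalized embeddings z1, z2 and integer labels y.
import torch
import torch.nn.functional as F

def class_conditional_infonce(z1, z2, y, temperature=0.1):
    """InfoNCE where, for each anchor, the positive is its other view and the
    negatives are restricted to examples from the same class."""
    sim = z1 @ z2.T / temperature                  # (B, B) cross-view similarities
    same_class = y.unsqueeze(0) == y.unsqueeze(1)  # keep only same-class pairs
    logits = sim.masked_fill(~same_class, float("-inf"))
    targets = torch.arange(len(y), device=z1.device)  # positive = matching index
    return F.cross_entropy(logits, targets)

def total_loss(supcon_loss, z1, z2, y, weight=0.5):
    """SupCon plus a weighted class-conditional InfoNCE term (weight is a guess)."""
    return supcon_loss + weight * class_conditional_infonce(z1, z2, y)

# Toy usage with random normalized embeddings.
B, d = 8, 16
z1 = F.normalize(torch.randn(B, d), dim=1)
z2 = F.normalize(torch.randn(B, d), dim=1)
y = torch.randint(0, 2, (B,))
print(float(class_conditional_infonce(z1, z2, y)))
```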