
    Determining the cumulative energy demand and greenhouse gas emission of Swedish wheat flour : a life cycle analysis approach

    Food production has a tremendous impact on human society, and there has been extensive debate over organic versus conventional farming. Producing enough food to feed a growing population by maximizing yield is the main goal of modern agriculture. In conventional farming, this goal is pursued by applying various synthetic chemicals to improve crop performance; however, this leads to environmental problems such as soil degradation, loss of biodiversity, and disruption of healthy ecosystems. As a result, consumers and food supply chain participants increasingly demand information on the environmental impact of food products. The main objective of the current study is to investigate the environmental impacts of organic and conventional wheat flour produced and supplied in Sweden, using life cycle analysis (LCA) and focusing on global warming potential (GWP) and cumulative energy demand (CED). A cradle-to-gate LCA is conducted with a functional unit (FU) of 1 ton of wheat flour at the gate of the milling facility. The results show that, in terms of GWP, the conventional system has higher emissions than the organic system, while the two systems have almost identical energy demands. The GWP is 356 kg CO2-eq/FU for the conventional system and 249 kg CO2-eq/FU for the organic system; the CED is 4025 MJ/FU for the conventional system and 3983 MJ/FU for the organic system. Farm activity is the hot-spot stage in both systems. Overall, from an environmental perspective, wheat flour from organic farming in Sweden is more sustainable than wheat flour from conventional farming systems, and increasing organic yields could further improve its environmental sustainability.
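    As a quick sanity check on the reported figures, the relative gaps implied by the abstract's numbers can be computed directly (the input values are taken from the abstract; the function name is illustrative):

    ```python
    def relative_reduction(conventional, organic):
        """Percent reduction of the organic system relative to the conventional one."""
        return 100.0 * (conventional - organic) / conventional

    gwp_gap = relative_reduction(356.0, 249.0)    # kg CO2-eq per FU
    ced_gap = relative_reduction(4025.0, 3983.0)  # MJ per FU
    print(f"GWP gap: {gwp_gap:.1f}%")  # roughly a 30% lower GWP for organic
    print(f"CED gap: {ced_gap:.1f}%")  # only about a 1% lower energy demand
    ```

    This makes the abstract's qualitative claim concrete: the two systems differ substantially in emissions but barely in energy demand.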

    Adversarial Weight Perturbation Improves Generalization in Graph Neural Network

    Much theoretical and empirical evidence shows that flatter local minima tend to improve generalization. Adversarial Weight Perturbation (AWP) is an emerging technique to efficiently and effectively find such minima. In AWP we minimize the loss w.r.t. a bounded worst-case perturbation of the model parameters, thereby favoring local minima with a small loss in a neighborhood around them. The benefits of AWP, and more generally the connections between flatness and generalization, have been extensively studied for i.i.d. data such as images. In this paper, we extensively study this phenomenon for graph data. Along the way, we first derive a generalization bound for non-i.i.d. node classification tasks. Then we identify a vanishing-gradient issue with all existing formulations of AWP and propose a new Weighted Truncated AWP (WT-AWP) to alleviate this issue. We show that regularizing graph neural networks with WT-AWP consistently improves both natural and robust generalization across many different graph learning tasks and models.
    Comment: AAAI 202
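    The minimize-under-worst-case-weight-perturbation loop that AWP describes can be sketched in a few lines. This is a minimal toy version on a quadratic loss, using a single-step inner ascent to approximate the worst case; the step sizes, the relative perturbation budget `gamma`, and the helper name are illustrative, not the paper's exact formulation:

    ```python
    import math

    def awp_step(w, grad_fn, lr=0.1, gamma=0.05):
        """One sketch step of Adversarial Weight Perturbation:
        1) move the weights toward an approximate worst case inside a ball
           of radius gamma * ||w|| (one ascent step along the gradient),
        2) descend using the gradient evaluated at the perturbed weights."""
        g = grad_fn(w)
        g_norm = math.sqrt(sum(x * x for x in g)) or 1.0
        radius = gamma * math.sqrt(sum(x * x for x in w))
        v = [radius * x / g_norm for x in g]  # scaled ascent direction
        g_adv = grad_fn([wi + vi for wi, vi in zip(w, v)])
        return [wi - lr * gi for wi, gi in zip(w, g_adv)]

    # toy quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w itself
    w = [1.0, -2.0]
    for _ in range(100):
        w = awp_step(w, grad_fn=lambda w: list(w))
    ```

    On this toy loss the iterates still converge to the minimum; the point of the construction is that the descent direction is taken at the perturbed weights, which penalizes sharp minima.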

    Long-term air pollution exposure impact on COVID-19 morbidity in China

    Although previous studies have proved the association between air pollution and respiratory viral infection, given the relatively short history of human infection with the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the linkage between long-term air pollution exposure and the morbidity of 2019 novel coronavirus (COVID-19) pneumonia remains poorly understood. To fill this gap, this study investigates the influence of particulate matter (PM2.5 and PM10), nitrogen dioxide (NO2), ozone (O3), sulfur dioxide (SO2) and carbon monoxide (CO) on the COVID-19 incidence rate, based on prefecture-level morbidity counts and air quality data in China. Annual means of ambient PM2.5, PM10, SO2, NO2, CO and O3 concentrations in each prefecture are used to estimate the population's exposure. We apply established statistical methods, i.e., Spearman's rank correlation and a negative binomial regression model, to demonstrate that people who are chronically exposed to ambient air pollution are more likely to be infected by COVID-19. Our statistical analysis indicates that a 1 μg m-3 increase in PM2.5, PM10 and NO2 is associated with a 1.95% (95% CI: 0.83 to 3.08%), 0.55% (95% CI: -0.05 to 1.17%) and 4.63% (95% CI: 3.07 to 6.22%) rise in COVID-19 morbidity, respectively, while the same increase in O3 is associated with a 2.05% (95% CI: 0.51 to 3.59%) decrease. However, we observe no significant association between long-term SO2 or CO exposure and COVID-19 morbidity. The robustness of our results is examined through sensitivity analyses that adjust for a wide range of confounders, including socio-economic, demographic, weather, healthcare, and mobility-related variables. We acknowledge that more laboratory results are required to prove the etiology of these associations.
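    In a log-link count model such as negative binomial regression, the percent change in expected morbidity per unit increase of a pollutant follows directly from the fitted coefficient. A small helper illustrating that relationship (the coefficient value below is reverse-engineered from the reported 1.95% figure purely for illustration):

    ```python
    import math

    def pct_change_per_unit(beta):
        """Percent change in the expected count per one-unit (here 1 μg/m^3)
        increase of a covariate in a log-link model: 100 * (exp(beta) - 1)."""
        return 100.0 * (math.exp(beta) - 1.0)

    # a coefficient of ln(1.0195) corresponds to the 1.95% rise reported for PM2.5
    beta_pm25 = math.log(1.0195)
    print(round(pct_change_per_unit(beta_pm25), 2))  # 1.95
    ```

    The same transformation applied to the confidence limits of the coefficient yields the percentage confidence intervals quoted in the abstract.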

    Unbiased Watermark for Large Language Models

    The recent advancements in large language models (LLMs) have sparked growing apprehension about their potential misuse. One approach to mitigating this risk is to incorporate watermarking techniques into LLMs, allowing model outputs to be tracked and attributed. This study examines a crucial aspect of watermarking: how significantly watermarks impact the quality of model-generated outputs. Previous studies have suggested a trade-off between watermark strength and output quality. However, our research demonstrates that, with an appropriate implementation, it is possible to integrate watermarks without affecting the output probability distribution. We refer to this type of watermark as an unbiased watermark. This has significant implications for the use of LLMs, as it becomes impossible for users to discern whether a service provider has incorporated watermarks or not. Furthermore, the presence of watermarks does not compromise the performance of the model in downstream tasks, ensuring that the overall utility of the language model is preserved. Our findings contribute to the ongoing discussion around responsible AI development, suggesting that unbiased watermarks can serve as an effective means of tracking and attributing model outputs without sacrificing output quality.
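    One well-known construction with this distribution-preserving property (shown here as a generic illustration, not necessarily the scheme proposed in this paper) is Gumbel-max sampling driven by a keyed pseudo-random generator: averaged over keys, the selected token is an exact sample from the model's distribution, while a party holding the key can later test for the correlation:

    ```python
    import math
    import random

    def keyed_gumbel_sample(probs, key):
        """Pick a token index via the Gumbel-max trick, with the Gumbel noise
        derived deterministically from a secret key. For a uniformly random
        key the output is an exact sample from `probs` (hence 'unbiased')."""
        rng = random.Random(key)
        noise = [-math.log(-math.log(max(rng.random(), 1e-12))) for _ in probs]
        scores = [math.log(p) + g for p, g in zip(probs, noise)]
        return max(range(len(probs)), key=scores.__getitem__)

    # empirical check: frequencies over many keys match the target distribution
    probs = [0.5, 0.3, 0.2]
    n = 20000
    counts = [0, 0, 0]
    for key in range(n):
        counts[keyed_gumbel_sample(probs, key)] += 1
    freqs = [c / n for c in counts]
    ```

    Because the per-key choice is deterministic, the key holder can replay the noise and detect watermarked text, while any observer without the key sees ordinary samples.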

    Towards Robust Dataset Learning

    Adversarial training has been actively studied in recent computer vision research to improve the robustness of models. However, due to the huge computational cost of generating adversarial samples, adversarial training methods are often slow. In this paper, we study the problem of learning a robust dataset such that any classifier naturally trained on the dataset is adversarially robust. Such a dataset benefits downstream tasks, as natural training is much faster than adversarial training, and demonstrates that the desired property of robustness is transferable between models and data. In this work, we propose a principled tri-level optimization to formulate the robust dataset learning problem. We show that, under an abstraction model that characterizes robust vs. non-robust features, the proposed method provably learns a robust dataset. Extensive experiments on MNIST, CIFAR10, and TinyImageNet demonstrate the effectiveness of our algorithm with different network initializations and architectures.

    Towards architecture-level middleware-enabled exception handling of component-based systems

    Exception handling is a practical and important way to improve the availability and reliability of a component-based system. The classical code-level exception handling approach is usually applied inside a component, while some exceptions can only, or more properly, be handled outside the components. In this paper, we propose a middleware-enabled approach for exception handling at the architecture level. Developers specify which exceptions should be handled and how to handle them, with the support of middleware, in an exception handling model that is complementary to the software architecture of the target system. This model is interpreted at runtime by a middleware-enabled exception handling framework, which is responsible for catching and handling the specified exceptions, mainly based on the common mechanisms provided by the middleware. The approach is demonstrated on JEE application servers and benchmarks. © 2011 ACM.
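    The core idea of declaring handlers outside the components and letting a runtime framework apply them can be illustrated with a toy registry. All names here are illustrative, and a real middleware framework would intercept calls at the container level rather than in application code:

    ```python
    class ExceptionHandlingModel:
        """Toy architecture-level model: handlers are declared separately from
        the components and applied by a middleware-like wrapper at call time."""

        def __init__(self):
            self._handlers = []  # list of (exception type, handler) pairs

        def on(self, exc_type, handler):
            """Declare, outside any component, how a given exception is handled."""
            self._handlers.append((exc_type, handler))

        def invoke(self, component, *args):
            """Call a component; exceptions it raises are resolved against the
            declared model instead of being handled inside the component."""
            try:
                return component(*args)
            except Exception as e:
                for exc_type, handler in self._handlers:
                    if isinstance(e, exc_type):
                        return handler(e)
                raise

    model = ExceptionHandlingModel()
    model.on(ZeroDivisionError, lambda e: 0.0)  # declared outside the component
    result = model.invoke(lambda a, b: a / b, 1, 0)
    ```

    The component itself contains no handling logic; swapping the declared model changes the recovery behavior without touching component code, which is the separation the paper's framework provides at the middleware level.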

    A novel statistical method for long-term coronavirus modelling

    Background: Novel coronavirus disease has recently become a worldwide public health concern. To determine the epidemic rate probability at any time in any region of interest, one needs an efficient bio-system reliability approach, particularly suitable for multi-regional environmental and health systems observed over a sufficient period of time, resulting in a reliable long-term forecast of the novel coronavirus infection rate. Traditional statistical methods dealing with temporal observations of multi-regional processes do not have the multi-dimensionality advantage that the suggested methodology offers, namely dealing efficiently with multiple regions at the same time and accounting for cross-correlations between different regional observations. Methods: A modern multi-dimensional statistical method, able to deal with territorial mapping, was applied directly to raw clinical data. A novel reliability method based on statistical extreme value theory is suggested to deal with the challenging epidemic forecast. The authors used MATLAB optimization software. Results: This paper describes a novel bio-system reliability approach, particularly suitable for multi-country environmental and health systems observed over a sufficient period of time, resulting in a reliable long-term forecast of the extreme novel coronavirus death rate probability. Namely, accurate maximum recorded patient numbers are predicted for the years to come for the analyzed provinces. Conclusions: The suggested method performed well, supplying not only an estimate but a 95% confidence interval as well. Note that the suggested methodology is not limited to any specific epidemic or any specific terrain; it is truly general. The only assumption and limitation is bio-system stationarity; alternatively, trend analysis should be performed first. The suggested methodology can be used in various public health applications, based on their clinical survey data.
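    Extreme-value forecasts of the kind described typically fit a distribution to block maxima and extrapolate to long return periods. A minimal one-dimensional stand-in (a method-of-moments Gumbel fit on synthetic maxima, not the paper's multi-dimensional method) looks like:

    ```python
    import math
    import random
    import statistics

    def gumbel_fit_mom(maxima):
        """Method-of-moments fit of a Gumbel distribution to block maxima."""
        m, s = statistics.mean(maxima), statistics.stdev(maxima)
        beta = s * math.sqrt(6) / math.pi  # scale
        mu = m - 0.5772 * beta             # location (Euler-Mascheroni constant)
        return mu, beta

    def return_level(mu, beta, period):
        """Level expected to be exceeded once per `period` blocks on average."""
        return mu - beta * math.log(-math.log(1.0 - 1.0 / period))

    # synthetic check: draw Gumbel(100, 10) maxima and recover the parameters
    rng = random.Random(0)
    samples = [100 - 10 * math.log(-math.log(rng.random() or 1e-12))
               for _ in range(5000)]
    mu_hat, beta_hat = gumbel_fit_mom(samples)
    level_100 = return_level(mu_hat, beta_hat, 100)
    ```

    A full treatment would fit the generalized extreme value family, carry the parameter uncertainty into a 95% confidence interval on the return level, and, as in the paper, handle many regions jointly.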

    PromptTTS: Controllable Text-to-Speech with Text Descriptions

    Using a text description as a prompt to guide the generation of text or images (e.g., GPT-3 or DALL-E 2) has drawn wide attention recently. Beyond text and image generation, in this work we explore the possibility of utilizing text descriptions to guide speech synthesis. Thus, we develop a text-to-speech (TTS) system (dubbed PromptTTS) that takes a prompt with both style and content descriptions as input to synthesize the corresponding speech. Specifically, PromptTTS consists of a style encoder and a content encoder to extract the corresponding representations from the prompt, and a speech decoder to synthesize speech according to the extracted style and content representations. Compared with previous works in controllable TTS that require users to have acoustic knowledge to understand style factors such as prosody and pitch, PromptTTS is more user-friendly, since text descriptions are a more natural way to express speech style (e.g., "A lady whispers to her friend slowly"). Given that there is no TTS dataset with prompts, to benchmark the task of PromptTTS we construct and release a dataset containing prompts with style and content information and the corresponding speech. Experiments show that PromptTTS can generate speech with precise style control and high speech quality. Audio samples and our dataset are publicly available.
    Comment: Submitted to ICASSP 202
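    The described three-module composition (style encoder, content encoder, speech decoder) can be sketched as a skeleton. The callables here are toy stand-ins for the neural modules, and all names are illustrative rather than the paper's actual interfaces:

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class PromptTTSSketch:
        """Skeleton of the described architecture: two encoders extract style
        and content representations from the prompt, and a decoder turns them
        into speech (here, a placeholder string instead of a waveform)."""
        style_encoder: Callable[[str], dict]
        content_encoder: Callable[[str], str]
        speech_decoder: Callable[[dict, str], str]

        def synthesize(self, prompt: str) -> str:
            style = self.style_encoder(prompt)      # e.g. whisper, slow pace
            content = self.content_encoder(prompt)  # the text to be spoken
            return self.speech_decoder(style, content)

    # toy stand-ins: keyword spotting for style, passthrough for content
    tts = PromptTTSSketch(
        style_encoder=lambda p: {"whisper": "whisper" in p, "slow": "slowly" in p},
        content_encoder=lambda p: p,
        speech_decoder=lambda s, c: f"<speech style={s}>{c}</speech>",
    )
    out = tts.synthesize("A lady whispers to her friend slowly")
    ```

    The design choice the abstract emphasizes is exactly this factorization: style and content are extracted separately from one natural-language prompt, so users describe the voice in plain words instead of specifying prosody or pitch values.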

    Regulation of the Late-Onset Alzheimer's Disease-Associated HLA-DQA1/DRB1 Expression

    Genome-wide association studies (GWAS) have identified ~42 late-onset Alzheimer's disease (LOAD)-associated loci, each of which contains multiple single nucleotide polymorphisms (SNPs) in linkage disequilibrium (LD), and most of these SNPs lie in non-coding regions of the human genome. However, how these SNPs regulate risk gene expression remains unknown. In this work, using a set of novel techniques, we identified six functional SNPs (fSNPs), including rs9271198, rs9271200, rs9281945, rs9271243, and rs9271247, on the LOAD-associated HLA-DRB1/DQA1 locus, and 42 proteins specifically binding to five of these six fSNPs. As proof of evidence, we verified the allele-specific binding of GATA2 and GATA3, ELAVL1 and HNRNPA0, ILF2 and ILF3, NFIB and NFIC, as well as CUX1, to these five fSNPs, respectively. Moreover, we demonstrate that all nine of these proteins regulate the expression of both HLA-DQA1 and HLA-DRB1 in human microglial cells. The contribution of HLA class II to the susceptibility to LOAD is discussed.