18 research outputs found

    How will alcohol sales in the UK be affected if drinkers follow government guidelines?

    Aims: The proportion of alcohol consumption that is above government guidelines ('risky drinking') has been estimated in several countries, suggesting that reductions in risky drinking would lead to significant declines in total alcohol consumption. However, this has not previously been conducted transparently in the UK. Furthermore, existing studies have under-explored the importance of several methodological decisions, as well as not closely examining the meaning of these figures for debates on 'corporate social responsibility' (CSR). Methods: Secondary analysis of the amount of alcohol consumption above various government guidelines in four British datasets for 2000-2002: the National Diet and Nutrition Survey; the General Household Survey; Smoking, Drinking and Drug Use among Young People; and the March 2002 ONS Omnibus Survey. Results: Risky drinking accounts for 55-82% of the total consumption by 18- to 64-year-olds, depending on the definition of risky drinking used. If only alcohol above the government guidelines is counted, this falls to 22-47%. Consumption by underage drinkers accounts for 4.5% of the total consumption, while consumption by drink-drivers accounts for 0.5-8.0% depending on the assumptions made. Conclusions: Methodologically, the study shows that at least two decisions have considerable importance: the definition of risky drinking used, and whether we count all drinking by risky drinkers (as in most previous studies) or only drinking above guidelines. Substantively, these studies do not directly show that drink companies' profitability would be affected by declines in risky drinking. Nevertheless, they are valuable for present debates in themselves and form the basis of a more complex analysis of alcohol CSR.
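    The gap between the two headline ranges (55-82% vs 22-47%) comes down to the accounting choice the abstract highlights. A minimal sketch with entirely hypothetical weekly consumption figures (the guideline value and sample are illustrative, not from the study):

    ```python
    # Hypothetical illustration of the two accounting choices in the abstract:
    # counting ALL consumption by risky drinkers vs only the units ABOVE the guideline.
    GUIDELINE = 21  # illustrative weekly guideline in UK units (assumption, not the study's figure)

    # Hypothetical weekly unit totals for a small sample of drinkers
    weekly_units = [5, 10, 18, 30, 45, 60]

    total = sum(weekly_units)
    # Choice 1: all drinking by anyone over the guideline
    risky_total = sum(u for u in weekly_units if u > GUIDELINE)
    # Choice 2: only the units in excess of the guideline
    excess_only = sum(u - GUIDELINE for u in weekly_units if u > GUIDELINE)

    print(f"Share counting all risky drinking:   {risky_total / total:.0%}")   # 80%
    print(f"Share counting only excess drinking: {excess_only / total:.0%}")   # 43%
    ```

    The same underlying data yields a much smaller share under the second definition, which is why the methodological choice matters for CSR debates.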


    No full text
    In this issue we take a look back at a completed and very successful SCC/MLA Conference, as well as a look forward to new resources and other opportunities the new year will present. UAMS Library News Editors

    QONNX: Representing Arbitrary-Precision Quantized Neural Networks

    No full text
    We present extensions to the Open Neural Network Exchange (ONNX) intermediate representation format to represent arbitrary-precision quantized neural networks. We first introduce support for low precision quantization in existing ONNX-based quantization formats by leveraging integer clipping, resulting in two new backward-compatible variants: the quantized operator format with clipping and quantize-clip-dequantize (QCDQ) format. We then introduce a novel higher-level ONNX format called quantized ONNX (QONNX) that introduces three new operators -- Quant, BipolarQuant, and Trunc -- in order to represent uniform quantization. By keeping the QONNX IR high-level and flexible, we enable targeting a wider variety of platforms. We also present utilities for working with QONNX, as well as examples of its usage in the FINN and hls4ml toolchains. Finally, we introduce the QONNX model zoo to share low-precision quantized neural networks.
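    The QCDQ idea the abstract describes can be sketched numerically: quantize to an integer grid, clip to the narrower integer range of the target precision, then dequantize. The function name, scale, and bit width below are illustrative assumptions, not the QONNX operator API:

    ```python
    import numpy as np

    def qcdq(x, bits=4, scale=0.1):
        """Quantize-clip-dequantize: a minimal sketch of using integer
        clipping to express precisions below 8 bits (illustrative only,
        not the QONNX/ONNX operator implementation)."""
        qmin = -(2 ** (bits - 1))          # e.g. -8 for 4-bit signed
        qmax = 2 ** (bits - 1) - 1         # e.g.  7 for 4-bit signed
        q = np.round(x / scale)            # quantize to the integer grid
        q = np.clip(q, qmin, qmax)         # clip to the low-precision range
        return q * scale                   # dequantize back to float

    x = np.array([-1.0, 0.33, 2.0])
    print(qcdq(x))                         # values saturate at +/-8, 7 steps of 0.1
    ```

    The clip step is what makes sub-8-bit precision expressible with standard 8-bit quantization operators, which is why the variants remain backward-compatible.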

    Open-source FPGA-ML codesign for the MLPerf Tiny Benchmark

    No full text
    We present our development experience and recent results for the MLPerf Tiny Inference Benchmark on field-programmable gate array (FPGA) platforms. We use the open-source hls4ml and FINN workflows, which aim to democratize AI-hardware codesign of optimized neural networks on FPGAs. We present the design and implementation process for the keyword spotting, anomaly detection, and image classification benchmark tasks. The resulting hardware implementations are quantized, configurable, spatial dataflow architectures tailored for speed and efficiency, and they introduce new generic optimizations and common workflows developed as part of this work. The full workflow is presented from quantization-aware training to FPGA implementation. The solutions are deployed on system-on-chip (Pynq-Z2) and pure FPGA (Arty A7-100T) platforms. The resulting submissions achieve latencies as low as 20 ÎŒs and energy consumption as low as 30 ÎŒJ per inference. We demonstrate how emerging ML benchmarks on heterogeneous hardware platforms can catalyze collaboration and the development of new techniques and more accessible tools.
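    Quantization-aware training, the starting point of the workflow described above, constrains weights to a low-precision integer grid during the forward pass. A generic per-tensor sketch (illustrative; not the exact hls4ml/FINN training code):

    ```python
    import numpy as np

    def fake_quant(w, bits=8):
        """Forward pass of symmetric per-tensor fake quantization, as
        commonly used in quantization-aware training (a generic sketch,
        not the hls4ml/FINN implementation)."""
        qmax = 2 ** (bits - 1) - 1                       # e.g. 7 for 4-bit signed
        scale = max(float(np.max(np.abs(w))) / qmax, 1e-12)  # avoid divide-by-zero
        return np.round(w / scale) * scale               # snap weights to the integer grid

    w = np.array([-0.5, 0.12, 0.31])
    print(fake_quant(w, bits=4))
    ```

    During training, the rounding step is typically bypassed in the backward pass (a straight-through estimator), so gradients still flow to the full-precision weights while the deployed FPGA design uses only the quantized values.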