88 research outputs found

    Compressed-domain transcoding of H.264/AVC and SVC video streams


    SwiftTron: An Efficient Hardware Accelerator for Quantized Transformers

    Transformers' compute-intensive operations pose enormous challenges for their deployment in resource-constrained EdgeAI / tinyML devices. As an established neural network compression technique, quantization reduces the hardware computational and memory resources. In particular, fixed-point quantization is desirable to ease the computations using lightweight blocks, like adders and multipliers, of the underlying hardware. However, deploying fully-quantized Transformers on existing general-purpose hardware, generic AI accelerators, or specialized architectures for Transformers with floating-point units might be infeasible and/or inefficient. To this end, we propose SwiftTron, an efficient specialized hardware accelerator designed for quantized Transformers. SwiftTron supports the execution of different types of Transformer operations (like Attention, Softmax, GELU, and Layer Normalization) and accounts for diverse scaling factors to perform correct computations. We synthesize the complete SwiftTron architecture in a 65 nm CMOS technology with the ASIC design flow. Our accelerator executes the RoBERTa-base model in 1.83 ms, while consuming 33.64 mW of power and occupying an area of 273 mm^2. To ease reproducibility, the RTL of our SwiftTron architecture is released at https://github.com/albertomarchisio/SwiftTron. Comment: To appear at the 2023 International Joint Conference on Neural Networks (IJCNN), Queensland, Australia, June 2023
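    The fixed-point scheme the abstract describes — integer arithmetic on lightweight adders/multipliers, with scaling factors restoring floating-point magnitudes — can be sketched as follows. This is a generic illustration of scaled fixed-point quantization, not SwiftTron's actual datapath; the function names and scale values are assumptions for the example.

    ```python
    import numpy as np

    def quantize(x, scale, bits=8):
        """Map float values to signed fixed-point integers with a per-tensor
        scale (illustrative; the paper defines SwiftTron's exact scheme)."""
        qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
        return np.clip(np.round(x / scale), qmin, qmax).astype(np.int32)

    def dequantize(q, scale):
        return q.astype(np.float64) * scale

    def quantized_matmul(a, b, scale_a, scale_b):
        """Integer-only matmul: int8-range operands accumulate in int32,
        and the combined scale factor restores the float magnitude."""
        qa, qb = quantize(a, scale_a), quantize(b, scale_b)
        acc = qa @ qb                      # pure integer adds and multiplies
        return dequantize(acc, scale_a * scale_b)

    rng_x, rng_w = np.random.default_rng(0), np.random.default_rng(1)
    x = rng_x.normal(size=(4, 8))
    w = rng_w.normal(size=(8, 4))
    approx = quantized_matmul(x, w, scale_a=0.05, scale_b=0.05)
    exact = x @ w                          # approx stays close to exact
    ```

    The key design point mirrored here is that the expensive inner loop touches only integers; the scale factors are applied once, outside the accumulation.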

    Application of remote sensing to state and regional problems

    The methods and procedures used, accomplishments, current status, and future plans are discussed for each of the following applications of LANDSAT in Mississippi: (1) land use planning in Lowndes County; (2) strip mine inventory and reclamation; (3) white-tailed deer habitat evaluation; (4) remote sensing data analysis support systems; (5) discrimination of unique forest habitats in potential lignite areas; (6) changes in gravel operations; and (7) determining freshwater wetlands for inventory and monitoring. The documentation of all existing software and the integration of the image analysis and data base software into a single package are now considered very high-priority items.

    Observability of Quantum Criticality and a Continuous Supersolid in Atomic Gases

    We analyze the Bose-Hubbard model with a three-body hardcore constraint by mapping the system to a theory of two coupled bosonic degrees of freedom. We find striking features that could be observable in experiments, including a quantum Ising critical point on the transition from atomic to dimer superfluidity at unit filling, and a continuous supersolid phase for strongly bound dimers. Comment: 4 pages, 2 figures, published version (Editors' Suggestion)
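    For reference, the model named in the abstract is conventionally written in the following standard form (the paper's own notation and sign conventions may differ):

    ```latex
    % Bose-Hubbard Hamiltonian with a three-body hardcore constraint
    H = -J \sum_{\langle i,j \rangle} \left( b_i^\dagger b_j + \mathrm{h.c.} \right)
        + \frac{U}{2} \sum_i n_i \left( n_i - 1 \right),
    \qquad \left( b_i^\dagger \right)^3 = 0,
    ```

    where $J$ is the nearest-neighbor hopping amplitude, $U$ the on-site interaction, $n_i = b_i^\dagger b_i$, and the constraint forbids more than two bosons per site, which is what makes a dimer (two-boson) superfluid possible.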

    A Study of Feature Extraction Using Divergence Analysis of Texture Features

    An empirical study of texture analysis for feature extraction and classification of high spatial resolution (10 meter) remotely sensed imagery is presented in terms of specific land cover types. The principal method examined is the use of spatial gray tone dependence (SGTD). The SGTD method reduces the gray levels within a moving window into a two-dimensional spatial gray tone dependence matrix, which can be interpreted as a probability matrix of gray tone pairs. Haralick et al. (1973) used a number of information theory measures to extract texture features from these matrices, including angular second moment (inertia), correlation, entropy, homogeneity, and energy. The derivation of the SGTD matrix is a function of: (1) the number of gray tones in an image; (2) the angle along which the frequency of SGTD is calculated; (3) the size of the moving window; and (4) the distance between gray tone pairs. The first three parameters were varied and tested on a 10 meter resolution panchromatic image of Maryville, Tennessee using the five SGTD measures. A transformed divergence measure was used to determine the statistical separability between four land cover categories (forest, new residential, old residential, and industrial) for each variation in texture parameters.
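    The SGTD matrix and a few of the Haralick-style measures built from it can be sketched as below. This is a minimal illustration of the co-occurrence technique the abstract describes, not the study's actual code; the offset and level counts are example assumptions.

    ```python
    import numpy as np

    def glcm(image, dx=1, dy=0, levels=8):
        """Spatial gray tone dependence (co-occurrence) matrix for one offset:
        counts how often gray tone pairs occur at the given displacement."""
        h, w = image.shape
        m = np.zeros((levels, levels), dtype=np.float64)
        for y in range(h - dy):
            for x in range(w - dx):
                m[image[y, x], image[y + dy, x + dx]] += 1
        m += m.T                 # count both directions (symmetric matrix)
        return m / m.sum()       # normalize to a probability matrix

    def texture_features(p):
        """A few Haralick (1973) measures computed from the SGTD matrix."""
        i, j = np.indices(p.shape)
        nz = p[p > 0]
        return {
            "asm": np.sum(p ** 2),                        # angular second moment
            "contrast": np.sum(p * (i - j) ** 2),         # inertia
            "entropy": -np.sum(nz * np.log(nz)),
            "homogeneity": np.sum(p / (1.0 + (i - j) ** 2)),
        }

    img = np.random.default_rng(0).integers(0, 8, size=(32, 32))
    p = glcm(img, dx=1, dy=0, levels=8)
    feats = texture_features(p)
    ```

    Varying `dx`/`dy` changes the angle and distance of the gray tone pairs, and `levels` the number of gray tones — the same three parameters the study varied alongside window size.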

    Pixel-Level Calibration in the Kepler Science Operations Center Pipeline

    We present an overview of the pixel-level calibration of flight data from the Kepler Mission performed within the Kepler Science Operations Center Science Processing Pipeline. This article describes the calibration (CAL) module, which operates on original spacecraft data to remove instrument effects and other artifacts that pollute the data. Traditional CCD data reduction is performed (removal of instrument/detector effects such as bias and dark current), in addition to pixel-level calibration (correcting for cosmic rays and variations in pixel sensitivity), Kepler-specific corrections (removing smear signals which result from the lack of a shutter on the photometer and correcting for distortions induced by the readout electronics), and additional operations that are needed due to the complexity and large volume of flight data. CAL operates on long (~30 min) and short (~1 min) sampled data, as well as full-frame images, and produces calibrated pixel flux time series, uncertainties, and other metrics that are used in subsequent Pipeline modules. The raw and calibrated data are also archived in the Multi-mission Archive at Space Telescope (MAST) at the Space Telescope Science Institute for use by the astronomical community.
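    The traditional CCD reduction step mentioned above follows a standard pattern: subtract bias and dark current, then divide out pixel-to-pixel sensitivity with a flat field. A minimal sketch of that pattern, assuming synthetic uniform-scene data (this models only the generic reduction, not CAL's Kepler-specific smear, cosmic-ray, or electronics corrections):

    ```python
    import numpy as np

    def calibrate_pixels(raw, bias, dark_rate, flat, exposure_s):
        """Generic CCD reduction: remove bias and dark current, then correct
        pixel sensitivity with a flat field. Illustrative sketch only."""
        corrected = raw - bias - dark_rate * exposure_s  # instrument signal removal
        return corrected / flat                          # flat-field correction

    rng = np.random.default_rng(0)
    truth = np.full((8, 8), 1000.0)                  # uniform scene flux (counts)
    bias = np.full((8, 8), 100.0)                    # electronic offset
    dark_rate = np.full((8, 8), 0.5)                 # counts per second
    flat = rng.uniform(0.9, 1.1, size=(8, 8))        # pixel sensitivity variations
    raw = truth * flat + bias + dark_rate * 60.0     # simulated 60 s exposure
    cal = calibrate_pixels(raw, bias, dark_rate, flat, exposure_s=60.0)
    # cal recovers the uniform 1000-count scene
    ```

    In a real pipeline each calibration product (bias, dark, flat) carries its own uncertainty, which is why CAL propagates per-pixel uncertainties alongside the flux time series.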

    RF Replay System for Narrowband GNSS IF Signals
