932 research outputs found

    Framework for Evaluating Sustainability of Transport System in Megalopolis and its Application

    It has been acknowledged that megalopolises play a leading role in both economic development and cultural change. Accordingly, the new emphasis on the sustainability of transport systems in megalopolises creates a demand for an adequate approach to measuring their performance and diagnosing potential drawbacks. By examining descriptions of sustainable transport systems and existing evaluation approaches, a framework with general applicability and easily accessible data sources is developed for evaluating the sustainability of a megalopolis transport system, based on the nature of regional structure and the features of transport demand in a megalopolis. The proposed framework is applied to the analysis and comparison of the Jing-Jin-Ji and Yangtze River Delta regions.

    Deep Feature Screening: Feature Selection for Ultra High-Dimensional Data via Deep Neural Networks

    Traditional statistical feature selection methods often struggle when applied to high-dimension, low-sample-size data, encountering problems such as overfitting, the curse of dimensionality, computational infeasibility, and strong model assumptions. In this paper, we propose a novel two-step nonparametric approach called Deep Feature Screening (DeepFS) that can overcome these problems and identify significant features with high precision for ultra high-dimensional, low-sample-size data. This approach first extracts a low-dimensional representation of the input data and then applies feature screening based on the multivariate rank distance correlation recently developed by Deb and Sen (2021). It combines the strengths of deep neural networks and feature screening, and thereby has the following appealing features in addition to its ability to handle ultra high-dimensional data with a small number of samples: (1) it is model-free and distribution-free; (2) it can be used for both supervised and unsupervised feature selection; and (3) it is capable of recovering the original input data. The superiority of DeepFS is demonstrated via extensive simulation studies and real data analyses.
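    The two-step idea in this abstract can be sketched in a few lines of numpy. This is a simplified illustration only: PCA stands in for the paper's deep autoencoder, and classical distance correlation stands in for the multivariate rank distance correlation of Deb and Sen (2021); the function names are invented for the sketch.

```python
import numpy as np

def _centered_dist(M):
    # pairwise Euclidean distance matrix, double-centered
    D = np.sqrt(((M[:, None, :] - M[None, :, :]) ** 2).sum(-1))
    return D - D.mean(axis=0) - D.mean(axis=1)[:, None] + D.mean()

def dist_corr(x, Z):
    # classical distance correlation between a feature x (n,) and Z (n, k)
    A, B = _centered_dist(x[:, None]), _centered_dist(Z)
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return float(np.sqrt(max((A * B).mean(), 0.0) / denom)) if denom > 0 else 0.0

def deep_fs_sketch(X, k=2, top=5):
    # Step 1: low-dimensional representation of the input data
    # (PCA here, standing in for the paper's deep autoencoder).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T
    # Step 2: screen features by their dependence on the representation.
    scores = np.array([dist_corr(X[:, j], Z) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:top], scores

rng = np.random.default_rng(0)
n, p = 60, 200                      # far more features than samples
X = rng.normal(size=(n, p))
latent = rng.normal(size=n)
X[:, :3] += 3.0 * latent[:, None]   # features 0-2 share a strong latent factor
top, scores = deep_fs_sketch(X)     # the three informative features rank highly
```

    The sketch is model-free in the same spirit as the abstract: screening only asks how strongly each raw feature depends on the learned representation, with no regression model fitted per feature.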

    The hairiness of worsted wool and cashmere yarns and the impact of fiber curvature on hairiness

    In this study, a range of carefully selected wool and cashmere yarns, as well as their blends, were used to examine the effects of fiber curvature and blend ratio on yarn hairiness. The results indicate that yarns spun from wool fibers with a higher curvature have lower hairiness than yarns spun from similar wool of a lower curvature. For blend yarns made from wool and cashmere of similar diameter, yarn hairiness increases with the cashmere content of the yarn. This is probably due to an increased proportion of the shorter cashmere fibers in the surface regions of the yarn. A modified hairiness composition model is used to explain these results and the likely origin of leading and trailing hairs. This model highlights the importance of yarn surface composition on yarn hairiness.

    AMG: Automated Efficient Approximate Multiplier Generator for FPGAs via Bayesian Optimization

    Approximate computing is a promising approach to reducing power, delay, and area in hardware design for many error-resilient applications, such as machine learning (ML) and digital signal processing (DSP) systems, in which multipliers are usually key arithmetic units. Due to the underlying architectural differences between ASICs and FPGAs, existing ASIC-based approximate multipliers do not offer symmetrical gains when implemented with FPGA resources. In this paper, we propose AMG, an open-source automated approximate multiplier generator for FPGAs driven by Bayesian optimization (BO) with parallel evaluation. The proposed method simplifies the exact half adders (HAs) used for the initial partial product (PP) compression in a multiplier while preserving coarse-grained additions for the subsequent accumulation. The generated multipliers can be effectively mapped to the lookup tables (LUTs) and carry chains provided by modern FPGAs, reducing hardware costs with acceptable errors. Compared with 1167 multipliers from previous works, our generated multipliers form a Pareto front with 28.70%-38.47% improvements on average in the product of hardware cost and error. All source code, reproduced multipliers, and our generated multipliers are available at https://github.com/phyzhenli/AMG.
    Comment: 7 pages, 2023 IEEE International Conference on Field-Programmable Technology (ICFPT)
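    The cost/error trade-off that AMG optimizes can be illustrated with a toy approximate multiplier. This sketch is not the paper's design: a single truncation parameter (dropping low-weight partial-product columns) replaces AMG's HA simplification, an exhaustive sweep replaces its Bayesian optimization, and the LUT-style cost proxy is an invention for the example. It only shows how candidate designs are scored and filtered to a Pareto front.

```python
import random

WIDTH = 8  # operand width in bits

def approx_mul(a, b, trunc, width=WIDTH):
    # AND-gate partial products a_i & b_j carry weight 2^(i+j);
    # columns with weight below 2^trunc are simply dropped.
    total = 0
    for i in range(width):
        for j in range(width):
            if i + j >= trunc and (a >> i) & 1 and (b >> j) & 1:
                total += 1 << (i + j)
    return total

def evaluate(trunc, samples, width=WIDTH):
    # cost proxy: number of partial-product bits still summed
    cost = sum(1 for i in range(width) for j in range(width) if i + j >= trunc)
    err = sum(abs(a * b - approx_mul(a, b, trunc, width))
              for a, b in samples) / len(samples)
    return cost, err

random.seed(1)
samples = [(random.randrange(256), random.randrange(256)) for _ in range(500)]
points = [(t, *evaluate(t, samples)) for t in range(WIDTH + 1)]

# keep only designs not dominated in the (cost, error) plane
front = [p for p in points
         if not any((q[1] <= p[1] and q[2] < p[2]) or
                    (q[1] < p[1] and q[2] <= p[2]) for q in points)]
```

    With `trunc=0` the multiplier is exact; increasing truncation trades mean absolute error for fewer partial-product bits, tracing out the kind of cost-vs-error front that AMG's BO search explores over a much richer design space.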