    GM-Net: Learning Features with More Efficiency

    Deep Convolutional Neural Networks (CNNs) are capable of learning unprecedentedly effective features from images. Researchers have sought to improve parameter efficiency using grouped convolution; however, the relation between the optimal number of convolutional groups and recognition performance remains an open problem. In this paper, we propose a series of Basic Units (BUs) and a two-level merging strategy to construct deep CNNs, referred to as a joint Grouped Merging Net (GM-Net), which produces jointly grouped and reused deep features while maintaining feature discriminability for classification tasks. Our GM-Net architectures with the proposed BU_A (dense connection) and BU_B (straight mapping) lead to a significant reduction in the number of network parameters and improve performance on image classification tasks. Extensive experiments validate the superior performance of GM-Net over state-of-the-art methods on benchmark datasets, e.g., MNIST, CIFAR-10, CIFAR-100, and SVHN. Comment: 6 pages, 5 figures
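    For orientation (this is not the authors' code), a minimal PyTorch sketch below shows the parameter savings that grouped convolution provides, which is the efficiency lever the Basic Units build on; the layer sizes (64 channels, 3x3 kernels) are illustrative assumptions.

    ```python
    # Minimal sketch: parameter count of a 3x3 convolution (64 -> 64 channels)
    # as the number of convolutional groups increases. Channel counts and
    # kernel size are illustrative, not taken from the paper.
    import torch.nn as nn

    def conv_params(groups: int) -> int:
        """Parameters of a 3x3, 64->64 conv split into `groups` groups."""
        conv = nn.Conv2d(64, 64, kernel_size=3, groups=groups, bias=False)
        return sum(p.numel() for p in conv.parameters())

    for g in (1, 2, 4, 8):
        print(f"groups={g}: {conv_params(g)} parameters")
    # groups=1: 36864  groups=2: 18432  groups=4: 9216  groups=8: 4608
    ```

    The count scales as 1/groups, which is why the choice of group count trades parameter efficiency against cross-group information flow, the trade-off the paper's merging strategy is designed to address.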

    A study on coarse-grained placement and routing for low-power FPGA architecture

    Degree system: new; report number: Kou 3603; degree type: Doctor of Engineering; date conferred: 2012/3/15; Waseda University degree number: Shin 595

    B-meson Semi-inclusive Decay to $2^{-+}$ Charmonium in NRQCD and X(3872)

    The semi-inclusive B-meson decay into spin-singlet D-wave $2^{-+}$ charmonium, $B \to \eta_{c2} + X$, is studied in nonrelativistic QCD (NRQCD). Both color-singlet and color-octet contributions are calculated at next-to-leading order (NLO) in the strong coupling constant $\alpha_s$. The non-perturbative long-distance matrix elements are evaluated using operator evolution equations. It is found that the color-singlet $^1D_2$ contribution is tiny, while the color-octet channels make dominant contributions. The estimated branching ratio $B(B \to \eta_{c2} + X)$ is about $0.41 \times 10^{-4}$ in the Naive Dimensional Regularization (NDR) scheme and $1.24 \times 10^{-4}$ in the 't Hooft-Veltman (HV) scheme, with renormalization scale $\mu = m_b = 4.8$ GeV. The scheme sensitivity of these numerical results is due to cancellation between the $^1S_0^{[8]}$ and $^1P_1^{[8]}$ contributions. The $\mu$-dependence curves of the NLO branching ratios in both schemes are also shown, with $\mu$ varying from $\frac{m_b}{2}$ to $2m_b$ and the NRQCD factorization (or renormalization) scale $\mu_\Lambda$ taken to be $2m_c$. Comparison of the estimated branching ratio of $B \to \eta_{c2} + X$ with the observed branching ratio of $B \to X(3872) + K$ may lead to the conclusion that X(3872) is unlikely to be the $2^{-+}$ charmonium state $\eta_{c2}$. Comment: version published in PRD, references added, 26 pages, 9 figures
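    For readers unfamiliar with the framework, the NRQCD factorization underlying such a calculation can be written schematically as below; this is the standard factorized form using the channels named in the abstract, not an equation copied from the paper.

    ```latex
    % Schematic NRQCD factorization: short-distance coefficients \hat\Gamma
    % times long-distance matrix elements (LDMEs); channels as in the abstract.
    \Gamma(B \to \eta_{c2} + X) =
        \hat\Gamma\bigl({}^1D_2^{[1]}\bigr)\,
          \bigl\langle \mathcal{O}^{\eta_{c2}}\bigl({}^1D_2^{[1]}\bigr) \bigr\rangle
      + \hat\Gamma\bigl({}^1S_0^{[8]}\bigr)\,
          \bigl\langle \mathcal{O}^{\eta_{c2}}\bigl({}^1S_0^{[8]}\bigr) \bigr\rangle
      + \hat\Gamma\bigl({}^1P_1^{[8]}\bigr)\,
          \bigl\langle \mathcal{O}^{\eta_{c2}}\bigl({}^1P_1^{[8]}\bigr) \bigr\rangle
    ```

    In this form, the abstract's finding reads as the first (color-singlet) term being negligible and the two color-octet terms dominating, with the NDR/HV scheme sensitivity arising from a cancellation between the latter two.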

    CleanML: A Study for Evaluating the Impact of Data Cleaning on ML Classification Tasks

    Data quality affects machine learning (ML) model performance, and data scientists spend a considerable amount of time on data cleaning before model training. To date, however, there is no rigorous study of how exactly cleaning affects ML: the ML community usually focuses on developing ML algorithms that are robust to particular noise types of certain distributions, while the database (DB) community has mostly studied data cleaning in isolation, without considering how the data is consumed by downstream ML analytics. We propose CleanML, a study that systematically investigates the impact of data cleaning on ML classification tasks. The open-source and extensible CleanML study currently includes 14 real-world datasets with real errors, five common error types, seven different ML models, and multiple cleaning algorithms for each error type (including both algorithms commonly used in practice and state-of-the-art solutions from the academic literature). We control the randomness in the ML experiments using statistical hypothesis testing, and we control the false discovery rate using the Benjamini-Yekutieli (BY) procedure. We analyze the results systematically to derive many interesting and nontrivial observations, and we put forward multiple research directions for researchers. Comment: published in ICDE 2021
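    To illustrate the statistical control step (a textbook sketch, not the CleanML code), the Benjamini-Yekutieli procedure rejects the k smallest p-values, where k is the largest index i such that the i-th smallest p-value is at most i*alpha/(m*c(m)), with c(m) = sum_{j=1}^{m} 1/j correcting for arbitrary dependence among the m tests; statsmodels exposes the same procedure as multipletests(pvals, alpha, method='fdr_by').

    ```python
    # Textbook Benjamini-Yekutieli FDR control (illustrative, not CleanML's code).
    import numpy as np

    def benjamini_yekutieli(pvals, alpha=0.05):
        """Boolean mask of hypotheses rejected under BY FDR control."""
        p = np.asarray(pvals, dtype=float)
        m = p.size
        order = np.argsort(p)                    # indices of ascending p-values
        c_m = np.sum(1.0 / np.arange(1, m + 1))  # harmonic correction for dependence
        thresholds = alpha * np.arange(1, m + 1) / (m * c_m)
        below = p[order] <= thresholds
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.nonzero(below)[0].max()       # largest index passing its threshold
            reject[order[:k + 1]] = True         # reject the k+1 smallest p-values
        return reject

    print(benjamini_yekutieli([0.001, 0.008, 0.039, 0.041, 0.20, 0.74]))
    # [ True False False False False False]
    ```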