    Evaluation of Hashing Methods Performance on Binary Feature Descriptors

    In this paper we evaluate the performance of data-dependent hashing methods on binary data. The goal is to find a hashing method that can effectively produce a lower-dimensional binary representation of 512-bit FREAK descriptors. A representative sample of recent unsupervised, semi-supervised and supervised hashing methods was experimentally evaluated on large datasets of labelled binary FREAK feature descriptors.
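As a point of reference for what "hashing binary descriptors to shorter binary codes" means, here is a minimal data-independent baseline (random-hyperplane LSH) applied to synthetic 512-bit descriptors. This is a generic sketch, not one of the data-dependent methods evaluated in the paper; the function name and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_codes(descriptors, n_bits=64):
    """Hash 0/1 binary descriptors down to n_bits via random hyperplanes."""
    d = descriptors.shape[1]
    planes = rng.standard_normal((d, n_bits))
    # Center the 0/1 bits around zero so the sign of the projection is balanced.
    projected = (descriptors - 0.5) @ planes
    return (projected > 0).astype(np.uint8)

# 1000 synthetic 512-bit FREAK-like descriptors (random stand-ins)
X = rng.integers(0, 2, size=(1000, 512)).astype(np.float64)
codes = lsh_codes(X, n_bits=64)   # 1000 codes of 64 bits each
```

Data-dependent methods improve on this baseline by learning the projections from the descriptor distribution (and, in the supervised case, from labels) rather than drawing them at random.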

    New code for equilibriums and quasiequilibrium initial data of compact objects. II. Convergence tests and comparisons of binary black hole initial data

    COCAL is a code for computing equilibriums or quasiequilibrium initial data of single or binary compact objects based on finite difference methods. We present the results of supplementary convergence tests of the COCAL code using time-symmetric binary black hole data (the Brill-Lindquist solution). Then, we compare the initial data of binary black holes on the conformally flat spatial slice obtained from COCAL and KADATH, where KADATH is a library for solving a wide class of problems in theoretical physics, including relativistic compact objects, with spectral methods. Data calculated from the two codes converge nicely towards each other, for close as well as largely separated circular orbits of binary black holes. Finally, as an example, a sequence of equal-mass binary black hole initial data with corotating spins is calculated and compared with data in the literature. Comment: 9 pages, PRD in press.
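A standard convergence test for a finite-difference code of this kind compares errors at two (or more) resolutions against a known solution such as Brill-Lindquist. Assuming the usual model err ≈ C·hᵖ (this sketch is generic, not COCAL's actual test harness), the observed order p follows from the error ratio:

```python
import math

def convergence_order(err_coarse, err_fine, refinement=2.0):
    """Estimate the convergence order p, assuming err ~ C * h**p,
    from errors measured at two grid spacings h and h/refinement."""
    return math.log(err_coarse / err_fine) / math.log(refinement)

# A second-order scheme: halving h should quarter the error.
p = convergence_order(4.0e-4, 1.0e-4)   # close to 2.0
```

Agreement between the measured p and the scheme's formal order is the usual pass criterion for such a test.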

    Rapid method for interconversion of binary and decimal numbers

    Decoding tree consisting of 40-bit semiconductor read-only memories interconverts binary and decimal numbers 50 to 100 times faster than current methods. The decimal-to-binary conversion algorithm is based on a divide-by-2 iterative equation; the binary-to-decimal conversion algorithm uses a multiply-by-2 iterative equation.
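The two iterative equations mentioned in the abstract are the classic conversion recurrences; a software sketch (the hardware decoding tree itself is not reproduced here) looks like this:

```python
def decimal_to_binary(n):
    """Divide-by-2 iteration: remainders give the bits, least significant first."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)   # quotient feeds the next step, remainder is a bit
        bits.append(str(r))
    return "".join(reversed(bits))

def binary_to_decimal(s):
    """Multiply-by-2 iteration (Horner's scheme): double and add each bit."""
    value = 0
    for bit in s:
        value = value * 2 + int(bit)
    return value

assert decimal_to_binary(50) == "110010"
assert binary_to_decimal("110010") == 50
```

The ROM-based decoding tree speeds this up by performing many such iteration steps as a single table lookup.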

    ReBNet: Residual Binarized Neural Network

    This paper proposes ReBNet, an end-to-end framework for training reconfigurable binary neural networks in software and developing efficient accelerators for execution on FPGAs. Binary neural networks offer an intriguing opportunity for deploying large-scale deep learning models on resource-constrained devices. Binarization reduces the memory footprint and replaces power-hungry matrix multiplication with lightweight XnorPopcount operations. However, binary networks suffer from degraded accuracy compared to their fixed-point counterparts. We show that state-of-the-art methods for optimizing binary network accuracy significantly increase the implementation cost and complexity. To compensate for the degraded accuracy while adhering to the simplicity of binary networks, we devise the first reconfigurable scheme that can adjust the classification accuracy based on the application. Our proposition improves the classification accuracy by representing features with multiple levels of residual binarization. Unlike previous methods, our approach does not exacerbate the area cost of the hardware accelerator; instead, it provides a tradeoff between throughput and accuracy, while the area overhead of multi-level binarization is negligible. Comment: To appear in the 26th IEEE International Symposium on Field-Programmable Custom Computing Machines.
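The XnorPopcount operation the abstract refers to is the standard trick for binary dot products: with weights and activations encoded as bits (1 for +1, 0 for -1), XNOR counts agreements and a popcount turns that into the dot product. A minimal sketch (not ReBNet's implementation):

```python
def xnor_popcount_dot(a, b, n_bits):
    """Dot product of two {-1,+1} vectors packed as bit masks.
    Bit value 1 encodes +1, bit value 0 encodes -1.
    XNOR marks positions where the signs agree; then
    dot = (#agreements) - (#disagreements) = 2*popcount - n_bits."""
    mask = (1 << n_bits) - 1
    matches = bin((~(a ^ b)) & mask).count("1")
    return 2 * matches - n_bits

# a encodes (+1,-1,+1,+1), b encodes (+1,+1,-1,+1):
# elementwise products are (+1,-1,-1,+1), so the dot product is 0.
print(xnor_popcount_dot(0b1011, 0b1101, 4))   # prints 0
```

On FPGA, the multiply-accumulate of a fixed-point layer collapses to exactly this XNOR-then-popcount circuit, which is where the power and area savings come from.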

    Convex Optimization for Binary Classifier Aggregation in Multiclass Problems

    Multiclass problems are often decomposed into multiple binary problems that are solved by individual binary classifiers whose results are integrated into a final answer. Various methods, including all-pairs (AP), one-versus-all (OVA), and error-correcting output codes (ECOC), have been studied to decompose multiclass problems into binary problems. However, little study has been made of how to optimally aggregate the binary results into a final answer to the multiclass problem. In this paper we present a convex optimization method for an optimal aggregation of binary classifiers to estimate class membership probabilities in multiclass problems. We model the class membership probability as a softmax function which takes, as input, a conic combination of discrepancies induced by the individual binary classifiers. With this model, we formulate the regularized maximum likelihood estimation as a convex optimization problem, which is solved by the primal-dual interior point method. Connections of our method to large margin classifiers are presented, showing that the large margin formulation can be considered as a limiting case of our convex formulation. Numerical experiments on synthetic and real-world data sets demonstrate that our method outperforms existing aggregation methods as well as direct methods, in terms of classification accuracy and the quality of class membership probability estimates. Comment: Appeared in Proceedings of the 2014 SIAM International Conference on Data Mining (SDM 2014).
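The forward model described in the abstract (softmax of a conic combination of per-classifier discrepancies) can be sketched as follows. The numbers, the sign convention (larger discrepancy means less likely class), and the weight values are illustrative assumptions; in the paper the nonnegative weights are what the convex program learns.

```python
import numpy as np

def class_probs(discrepancies, weights):
    """Class membership probabilities as a softmax of a conic
    (nonnegative-weighted) combination of binary-classifier discrepancies.
    discrepancies: (n_classes, n_binary) matrix; weights: (n_binary,) >= 0.
    Assumed sign convention: higher discrepancy -> lower probability."""
    scores = -(discrepancies @ weights)   # conic combination per class, negated
    scores -= scores.max()                # shift for numerical stability
    e = np.exp(scores)
    return e / e.sum()

# 3 classes scored by 2 binary classifiers (toy numbers)
D = np.array([[0.9, 0.2],
              [0.1, 0.8],
              [0.4, 0.5]])
w = np.array([1.0, 0.5])                  # would be learned in the paper
p = class_probs(D, w)                     # sums to 1; class 1 is most probable
```

Because the log-likelihood of this softmax model is concave in the weights, the regularized maximum-likelihood fit is a convex problem, which is what makes the interior-point solution possible.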

    Recovery of binary sparse signals from compressed linear measurements via polynomial optimization

    The recovery of signals with finite-valued components from few linear measurements is a problem with widespread applications and interesting mathematical characteristics. In the compressed sensing framework, tailored methods have recently been proposed to deal with the case of finite-valued sparse signals. In this work, we focus on binary sparse signals and propose a novel formulation based on polynomial optimization. This approach is analyzed and compared to state-of-the-art binary compressed sensing methods.
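Polynomial optimization enters here because binarity can be written as a polynomial constraint: x_i(x_i - 1) = 0 forces each component into {0, 1}. The sketch below illustrates the recovery problem itself with a brute-force search over binary candidates, which is tractable only for tiny dimensions and is a stand-in for, not an implementation of, the paper's polynomial relaxation; all sizes and data are synthetic.

```python
import itertools
import numpy as np

def recover_binary(A, y):
    """Find the binary x minimizing ||A x - y||_2 by exhaustive search.
    The paper instead relaxes the constraints x_i*(x_i - 1) = 0 into a
    polynomial optimization problem; this brute force is only a reference."""
    n = A.shape[1]
    best, best_err = None, np.inf
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits, dtype=float)
        err = np.linalg.norm(A @ x - y)
        if err < best_err:
            best, best_err = x, err
    return best

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 8))           # m=4 measurements, n=8 unknowns
x_true = np.zeros(8)
x_true[[2, 5]] = 1.0                      # a 2-sparse binary signal
y = A @ x_true                            # noiseless compressed measurements
x_hat = recover_binary(A, y)              # recovers x_true
```

Even though m < n, the binary and sparsity structure makes the solution identifiable, which is exactly the regime compressed sensing for finite-valued signals targets.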