
    On the minimal Hamming weight of a multi-base representation

    Get PDF
    CITATION: Krenn, D., Suppakitpaisarn, V. & Wagner, S. 2020. On the minimal Hamming weight of a multi-base representation. Journal of Number Theory, 208:168–179. doi:10.1016/j.jnt.2019.07.023. The original publication is available at https://www.sciencedirect.com
    Given a finite set of bases $b_1, b_2, \ldots, b_r$ (integers greater than 1), a multi-base representation of an integer $n$ is a sum with summands $d \cdot b_1^{\alpha_1} b_2^{\alpha_2} \cdots b_r^{\alpha_r}$, where the $\alpha_j$ are nonnegative integers and the digits $d$ are taken from a fixed finite set. We consider multi-base representations with at least two bases that are multiplicatively independent. Our main result states that the order of magnitude of the minimal Hamming weight of an integer $n$, i.e., the minimal number of nonzero summands in a representation of $n$, is $\log n / \log\log n$. This is independent of the number of bases, the bases themselves, and the digit set. For the proof, the existing upper bound for prime bases is generalized to multiplicatively independent bases; for the required analysis of the natural greedy algorithm, an auxiliary result in Diophantine approximation is derived. The lower bound follows from a counting argument and, alternatively, from communication complexity, thereby improving the existing bounds and closing the gap in the order of magnitude.
    Austrian Science Fund
    https://www.sciencedirect.com/science/article/pii/S0022314X19302768
    Publisher's version
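    The abstract refers to a "natural greedy algorithm" for building such representations. Purely as an illustrative sketch (the bases {2, 3}, the digit set {1, 2}, and the exponent bound below are arbitrary choices, not the paper's setup), a greedy decomposition that repeatedly subtracts the largest representable summand, and whose number of terms is the resulting Hamming weight, could look like this:

        from itertools import product

        def greedy_multibase_weight(n, bases=(2, 3), digits=(1, 2), max_exp=64):
            """Hamming weight (number of summands) of a greedy decomposition of n
            into terms d * b_1**a_1 * ... * b_r**a_r.

            Illustrative sketch only: bases, digit set, and exponent bound are
            arbitrary choices, not taken from the paper.
            """
            terms = []
            while n > 0:
                best, best_term = 0, None
                # Largest representable term d * prod(b**a) that does not exceed n.
                for d in digits:
                    for exps in product(range(max_exp), repeat=len(bases)):
                        value = d
                        for b, a in zip(bases, exps):
                            value *= b ** a
                        if best < value <= n:
                            best, best_term = value, (d, exps)
                if best_term is None:
                    raise ValueError("remaining value not representable with this digit set")
                terms.append(best_term)
                n -= best
            return len(terms), terms

        # Example: weight, terms = greedy_multibase_weight(10**6)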

    Optimal Average Joint Hamming Weight and Minimal Weight Conversion of d Integers

    Get PDF
    In this paper, we propose the minimal joint Hamming weight conversion for any binary expansions of d integers. With redundant representations, we may represent a number by many expansions, and the minimal joint Hamming weight conversion is the algorithm that selects the expansion with the least joint Hamming weight. As the computation time of the cryptosystem strongly depends on the joint Hamming weight, the conversion can make the cryptosystem faster. Most existing conversions are limited to specific representations and are difficult to apply to other representations. Our conversion, on the other hand, is applicable to any binary expansions. The proposed conversion can determine minimal average weights for classes of representations that had not been found before. One of the most interesting results is that, for the expansion of integer pairs with the digit set $\{0, \pm 1, \pm 3\}$, we show that the minimal average joint Hamming weight is 0.3575. This improves the upper bound value, 0.3616, proposed by Dahmen, Okeya, and Takagi.
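    As a small illustration of the quantity being minimized (not of the paper's conversion algorithm itself), the joint Hamming weight of several digit expansions counts the positions where at least one expansion has a nonzero digit; the expansions and digit values below are arbitrary examples:

        def joint_hamming_weight(expansions):
            """Joint Hamming weight of several digit expansions (least significant digit first).

            A position contributes 1 if any expansion has a nonzero digit there.
            This only evaluates the weight of given expansions; choosing, among all
            redundant expansions over a digit set such as {0, +/-1, +/-3}, the ones
            that minimize this value is what the paper's conversion does.
            """
            length = max(len(e) for e in expansions)
            return sum(
                1
                for i in range(length)
                if any(i < len(e) and e[i] != 0 for e in expansions)
            )

        # 23 = (1, 1, 1, 0, 1) and 13 = (1, 0, 1, 1) in plain binary, LSB first.
        print(joint_hamming_weight([[1, 1, 1, 0, 1], [1, 0, 1, 1]]))  # -> 5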

    Modern Computer Arithmetic (version 0.5.1)

    Full text link
    This is a draft of a book about algorithms for performing arithmetic, and their implementation on modern computers. We are concerned with software more than hardware - we do not cover computer architecture or the design of computer hardware. Instead we focus on algorithms for efficiently performing arithmetic operations such as addition, multiplication and division, and their connections to topics such as modular arithmetic, greatest common divisors, the Fast Fourier Transform (FFT), and the computation of elementary and special functions. The algorithms that we present are mainly intended for arbitrary-precision arithmetic. They are not limited by the computer word size, only by the memory and time available for the computation. We consider both integer and real (floating-point) computations. The book is divided into four main chapters, plus an appendix. Our aim is to present the latest developments in a concise manner. At the same time, we provide a self-contained introduction for the reader who is not an expert in the field, and exercises at the end of each chapter. Chapter titles are: 1, Integer Arithmetic; 2, Modular Arithmetic and the FFT; 3, Floating-Point Arithmetic; 4, Elementary and Special Function Evaluation; 5 (Appendix), Implementations and Pointers. The book also contains a bibliography of 236 entries, index, summary of notation, and summary of complexities.
    Comment: Preliminary version of a book to be published by Cambridge University Press. xvi+247 pages. Cite as "Modern Computer Arithmetic, Version 0.5.1, 5 March 2010". For further details, updates and errata see http://wwwmaths.anu.edu.au/~brent/pub/pub226.html or http://www.loria.fr/~zimmerma/mca/pub226.htm
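    The modular-arithmetic material described above centers on exactly this kind of arbitrary-precision computation. As a hedged illustration (a standard textbook routine, not code from the book), left-to-right square-and-multiply modular exponentiation using Python's built-in big integers looks like this:

        def mod_pow(base, exponent, modulus):
            """Left-to-right binary (square-and-multiply) modular exponentiation."""
            result = 1
            base %= modulus
            for bit in bin(exponent)[2:]:               # most significant bit first
                result = (result * result) % modulus    # square
                if bit == '1':
                    result = (result * base) % modulus  # multiply
            return result

        assert mod_pow(7, 560, 561) == pow(7, 560, 561)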

    Data Structures & Algorithm Analysis in C++

    Get PDF
    This is the textbook for CSIS 215 at Liberty University.

    Domain Adaptation and Domain Generalization with Representation Learning

    No full text
    Machine learning has achieved great successes in the area of computer vision, especially in object recognition and classification. One of the core factors behind these successes is the availability of massive labeled image or video data for training, collected manually by humans. Labeling source training data, however, can be expensive and time consuming. Furthermore, a large amount of labeled source data does not always guarantee that traditional machine learning techniques generalize well; there may be a bias or mismatch in the data, i.e., the training data do not represent the target environment. To mitigate this dataset bias/mismatch, one can consider domain adaptation: utilizing labeled training data and unlabeled target data to develop a classifier that performs well on the target environment. In some cases, however, unlabeled target data are nonexistent, but multiple labeled sources of data exist. Such situations can be addressed by domain generalization: using multiple source training sets to produce a classifier that generalizes to an unseen target domain. Although several domain adaptation and generalization approaches have been proposed, domain mismatch in object recognition remains a challenging, open problem: model performance has yet to reach a satisfactory level in real-world applications.
    The overall goal of this thesis is to progress towards solving dataset bias in visual object recognition through representation learning in the context of domain adaptation and domain generalization. Representation learning is concerned with finding proper data representations or features via learning rather than via engineering by human experts. This thesis proposes several representation learning solutions based on deep learning and kernel methods.
    This thesis introduces a noise-robust deep neural network for handwritten digit classification trained on "clean" images only, which we name the Deep Hybrid Network (DHN). DHNs are based on a particular combination of sparse autoencoders and restricted Boltzmann machines. The results show that DHN performs better than the standard deep neural network in recognizing digits corrupted with Gaussian and impulse noise or with block and border occlusions. This thesis proposes the Domain Adaptive Neural Network (DaNN), a neural network based domain adaptation algorithm that minimizes both the classification error and the domain discrepancy between the source and target data representations. The experiments show the competitiveness of DaNN against several state-of-the-art methods on a benchmark object dataset. This thesis develops the Multi-task Autoencoder (MTAE), a domain generalization algorithm based on autoencoders trained via multi-task learning. MTAE learns to transform the original image into its analogs in multiple related domains simultaneously. The results show that MTAE's representations provide better classification performance than some alternative autoencoder-based models as well as the current state-of-the-art domain generalization algorithms. This thesis proposes a fast kernel-based representation learning algorithm for both domain adaptation and domain generalization, Scatter Component Analysis (SCA). SCA finds a data representation that trades off between maximizing the separability of classes, minimizing the mismatch between domains, and maximizing the separability of the data as a whole. The results show that SCA performs much faster than some competitive algorithms, while providing state-of-the-art accuracy in both domain adaptation and domain generalization. Finally, this thesis presents the Deep Reconstruction-Classification Network (DRCN), a deep convolutional network for domain adaptation. DRCN learns to classify labeled source data and also to reconstruct unlabeled target data via a shared encoding representation. The results show that DRCN provides competitive or better performance than the prior state-of-the-art models on several cross-domain object datasets.
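    DaNN, as summarized above, jointly minimizes classification error and a measure of the discrepancy between source and target feature distributions. Without restating the thesis's own formulation, one widely used discrepancy of this kind is the maximum mean discrepancy (MMD); the Gaussian-kernel estimator below, with an arbitrary bandwidth and toy data, is only a sketch of such a penalty term:

        import numpy as np

        def gaussian_kernel(a, b, sigma=1.0):
            """Pairwise Gaussian (RBF) kernel matrix between the rows of a and b."""
            sq_dists = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
            return np.exp(-sq_dists / (2 * sigma**2))

        def mmd2(source, target, sigma=1.0):
            """Biased estimate of the squared maximum mean discrepancy (MMD)."""
            return (gaussian_kernel(source, source, sigma).mean()
                    + gaussian_kernel(target, target, sigma).mean()
                    - 2 * gaussian_kernel(source, target, sigma).mean())

        # Toy usage: two clouds of 2-D features drawn from slightly shifted distributions.
        rng = np.random.default_rng(0)
        src = rng.normal(0.0, 1.0, size=(100, 2))
        tgt = rng.normal(0.5, 1.0, size=(100, 2))
        print(mmd2(src, tgt))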

    Classification algorithms on the cell processor

    Get PDF
    The rapid advancement in the capacity and reliability of data storage technology has allowed for the retention of a virtually limitless quantity and detail of digital information. Massive information databases are becoming more and more widespread among governmental, educational, scientific, and commercial organizations. By segregating this data into carefully defined input (e.g., images) and output (e.g., classification labels) sets, a classification algorithm can be used to develop an internal expert model of the data by employing a specialized training algorithm. A properly trained classifier is capable of predicting the output for future input data from the same input domain that it was trained on. Two popular classifiers are Neural Networks and Support Vector Machines. Both, as with most accurate classifiers, require massive computational resources to carry out the training step and can take months to complete when dealing with extremely large data sets. In most cases, utilizing larger training sets improves the final accuracy of the trained classifier. However, access to the kinds of computational resources required to do so is expensive and out of reach of private or underfunded institutions. The Cell Broadband Engine (CBE), developed by Sony, Toshiba, and IBM, has recently been introduced to the market. Its currently most inexpensive iteration is available in the Sony PlayStation 3® computer entertainment system. The CBE is a novel multi-core architecture which features many hardware enhancements designed to accelerate the processing of massive amounts of data. These characteristics and the cheap and widespread availability of this technology make the Cell a prime candidate for the task of training classifiers. In this work, the feasibility of using the Cell processor to train Neural Networks and Support Vector Machines was explored. In the Neural Network family of classifiers, the fully connected Multilayer Perceptron and the Convolutional Network were implemented. In the Support Vector Machine family, a working-set technique known as the Gradient Projection-based Decomposition Technique, as well as the Cascade SVM, were implemented.
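    The training workloads described above are dominated by dense linear algebra, which is the kind of bulk computation that maps well onto the Cell's SPE cores. As a hedged, hardware-agnostic sketch (plain numpy with arbitrary layer sizes, not the thesis's Cell implementation), one gradient step for a fully connected multilayer perceptron looks like this:

        import numpy as np

        def mlp_train_step(x, y, w1, w2, lr=0.01):
            """One gradient-descent step for a one-hidden-layer MLP with tanh units
            and a squared-error loss; the matrix products are the bulk computation
            that a Cell port would offload to the SPEs."""
            h = np.tanh(x @ w1)                             # forward: hidden activations
            err = h @ w2 - y                                # forward: linear output, residual
            grad_w2 = h.T @ err                             # backward: output-layer gradient
            grad_w1 = x.T @ ((err @ w2.T) * (1.0 - h**2))   # backward: tanh' = 1 - tanh^2
            return w1 - lr * grad_w1, w2 - lr * grad_w2, float(np.mean(err**2))

        # Toy usage with random data and small layer sizes.
        rng = np.random.default_rng(0)
        x, y = rng.normal(size=(32, 8)), rng.normal(size=(32, 1))
        w1, w2 = rng.normal(scale=0.1, size=(8, 16)), rng.normal(scale=0.1, size=(16, 1))
        for _ in range(5):
            w1, w2, loss = mlp_train_step(x, y, w1, w2)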