22 research outputs found

    Efficient Learning Machines

    Computer science

    Advances in Data Mining Knowledge Discovery and Applications

    Advances in Data Mining Knowledge Discovery and Applications aims to help data miners, researchers, scholars, and PhD students who wish to apply data mining techniques. The primary contribution of this book is to highlight frontier fields and implementations of knowledge discovery and data mining. Although the same approaches and techniques recur, they can be applied productively across different fields and areas of expertise. The book presents knowledge discovery and data mining applications in two sections. Data mining draws on statistics, machine learning, data management and databases, pattern recognition, artificial intelligence, and related areas, and most of these areas are covered here through different applications. The eighteen chapters are classified into two parts: Knowledge Discovery and Data Mining Applications.

    Deep Learning Designs for Physical Layer Communications

    Wireless communication systems and their underlying technologies have undergone unprecedented advances over the last two decades to meet the ever-increasing demands of various applications and emerging technologies. However, traditional signal processing schemes and algorithms for wireless communications cannot handle the surging complexity of fifth-generation (5G) and beyond communication systems caused by network expansion, new emerging technologies, high data rates, and the demand for low latency. This thesis extends traditional downlink transmission schemes to deep learning-based precoding and detection techniques that are hardware-efficient and of lower complexity than the current state of the art. The thesis focuses on precoding/beamforming in massive multiple-input multiple-output (MIMO) systems, signal detection, and lightweight neural network (NN) architectures for precoder and decoder designs. We introduce a learning-based precoder design via constructive interference (CI) that performs the precoding on a symbol-by-symbol basis. Instead of conventionally training an NN without considering the specifics of the optimisation objective, we unfold a power-minimisation symbol-level precoding (SLP) formulation based on the interior-point-method (IPM) proximal ‘log’ barrier function. Furthermore, we propose a concept of NN compression in which the weights are quantised to lower numerical precision formats based on binary and ternary quantisation. We further introduce a stochastic quantisation technique in which parts of the NN weight matrix are quantised while the remainder is left at full precision. Finally, we propose a systematic complexity scaling of deep neural network (DNN) based MIMO detectors. The model uses a fraction of the DNN inputs by scaling their values through weights that follow monotonically non-increasing functions. Furthermore, we investigate performance-complexity trade-offs via regularisation constraints on the layer weights such that, at inference, parts of the network layers can be removed with minimal impact on detection accuracy. Simulation results show that our proposed learning-based techniques offer better complexity-versus-BER (bit-error-rate) and complexity-versus-transmit-power performance compared to state-of-the-art MIMO detection and precoding techniques.
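
    As a rough illustration of the weight-compression idea described above (not code from the thesis), the sketch below shows binary, ternary, and partial (stochastic) quantisation of a weight matrix in NumPy; the scaling factors, threshold_ratio, and fraction parameters are illustrative assumptions rather than the thesis's exact formulation.

        import numpy as np

        def binarize(W):
            # Binary quantisation: keep only the sign of each weight and rescale
            # by the mean absolute value to preserve the layer's magnitude.
            alpha = np.abs(W).mean()
            return alpha * np.sign(W)

        def ternarize(W, threshold_ratio=0.7):
            # Ternary quantisation: weights near zero are set to 0, the rest are
            # mapped to +/- alpha, with alpha the mean magnitude of kept weights.
            delta = threshold_ratio * np.abs(W).mean()
            mask = np.abs(W) > delta
            alpha = np.abs(W[mask]).mean() if mask.any() else 0.0
            return alpha * np.sign(W) * mask

        def stochastic_quantize(W, fraction=0.5, seed=None):
            # Stochastic (partial) quantisation: a random subset of the weight
            # matrix is ternarised while the remaining entries keep full precision.
            rng = np.random.default_rng(seed)
            pick = rng.random(W.shape) < fraction
            return np.where(pick, ternarize(W), W)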

    Sensors and Systems for Indoor Positioning

    This reprint collects the articles that appeared in the Sensors (MDPI) Special Issue "Sensors and Systems for Indoor Positioning". The published original contributions focus on systems and technologies that enable indoor positioning applications.

    Vol. 13, No. 1 (Full Issue)


    Automatic aspect extraction in information retrieval diversity

    In this master's thesis we describe a new automatic aspect extraction algorithm that incorporates relevance information into the dynamics of Probabilistic Latent Semantic Analysis. A utility-biased likelihood statistical framework is described to formalize the incorporation of prior relevance information into the dynamics of the algorithm intrinsically. Moreover, a general abstract algorithm is presented to incorporate arbitrary new feature variables into the analysis. A tempering procedure is inferred for this general algorithm as an entropic regularization of the utility-biased likelihood functional, and a geometric interpretation of the algorithm is described, showing the intrinsic changes in the information space of the problem produced when different sources of prior utility estimation are provided over the same data. The general algorithm is applied to several information retrieval, recommendation, and personalization tasks. Moreover, a set of post-processing aspect filters is presented. Some characteristics of the aspect distributions, such as sparsity or low entropy, are identified as enhancing the overall diversity attained by the diversification algorithm. The proposed filters ensure that the final aspect space has those properties, thus leading to better diversity levels. An experimental setup over TREC Web Track 09-12 data shows that the algorithm surpasses classic pLSA as an aspect extraction tool for search diversification. Additional theoretical applications of the general procedure to information retrieval, recommendation, and personalization tasks are given, leading to new relevance-aware models incorporating several variables into the latent semantic analysis. Finally, the problem of optimizing the aspect space size for diversification is addressed. Analytical formulas for the dependency of diversity metrics on the choice of an automatically extracted aspect space are given under a simplified generative model for the relation between system aspects and evaluation (true) aspects. An experimental analysis of this dependence is performed over TREC Web Track data using pLSA as the aspect extraction algorithm.
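
    For reference, the sketch below implements the standard pLSA EM updates that the thesis takes as its baseline; the utility-biased, relevance-aware variant described above is not reproduced here, and the array shapes, iteration count, and smoothing constant are illustrative choices.

        import numpy as np

        def plsa(counts, n_aspects, n_iter=100, seed=0):
            # counts: (n_docs, n_words) term-frequency matrix.
            rng = np.random.default_rng(seed)
            n_docs, n_words = counts.shape
            p_z_d = rng.random((n_docs, n_aspects)); p_z_d /= p_z_d.sum(1, keepdims=True)
            p_w_z = rng.random((n_aspects, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)
            for _ in range(n_iter):
                # E-step: responsibilities P(z | d, w), shape (n_docs, n_words, n_aspects).
                joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
                resp = joint / (joint.sum(2, keepdims=True) + 1e-12)
                weighted = counts[:, :, None] * resp
                # M-step: re-estimate P(w | z) and P(z | d) from expected counts.
                p_w_z = weighted.sum(0).T
                p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
                p_z_d = weighted.sum(1)
                p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
            return p_z_d, p_w_z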

    Symmetric and Asymmetric Data in Solution Models

    This book is a Printed Edition of the Special Issue covering research on symmetric and asymmetric data that occur in real-life problems. We invited authors to submit their theoretical or experimental research presenting engineering and economic problem solution models that deal with symmetry or asymmetry of different data types. The Special Issue gained interest in the research community and received many submissions. After rigorous scientific evaluation by editors and reviewers, seventeen papers were accepted and published. The authors proposed different solution models, mainly covering uncertain data in multicriteria decision-making (MCDM) problems, as complex tools for balancing the symmetry between goals, risks, and constraints when coping with complicated problems in engineering or management. We therefore invite researchers interested in these topics to read the papers collected in this book.

    Engineering of reconfigurable integrated photonics for quantum computation protocols

    Over the last decade, integrated optics has emerged as one of the main technologies for quantum optics and, more generally, quantum computation, quantum cryptography, and communication. In particular, it is fundamental for the construction of reconfigurable interferometers with a high number of optical modes. In this thesis we present, on the one hand, the development of a new geometry for the creation of integrated reconfigurable devices with a high number of modes and, on the other hand, the development of quantum computation protocols to be realized in integrated photonic chips. In the first part, two algorithms are proposed for the characterization of integrated circuits in terms of the implemented unitary matrix. The first uses a so-called black-box approach, i.e. one that makes no assumptions about the internal structure of the device under consideration, and is based on second-order correlation measurements with coherent light. The second is specific to a planar rectangular geometry, first proposed by Clements et al., which has a variety of applications in the literature and is also employed in this thesis. Subsequently, we present the realization of a new 32-mode reconfigurable integrated photonic device with a continuously coupled three-dimensional geometry. Its potential in terms of reconfigurability is tested, and a boson sampling experiment with three and four photons is carried out to show its potential in the field of quantum computation. In the second part, we propose the application of integrated photonic devices to two quantum computation protocols. The first, recently proposed, is the quantum extension of a problem called the Bernoulli factory. It consists in the construction of a qubit from n qubits in the same unknown state such that there is a predetermined exact relation between the output and input states. In the thesis, we theoretically analyze the computational complexity of the problem in terms of the number of qubits used and the success probability. Furthermore, a photonic implementation is proposed and experimentally tested for correctness and resilience to experimental noise. The second application consists of the experimental implementation of a quantum metrology protocol in which three distinct phases are estimated simultaneously, showing that the use of indistinguishable photons leads to an advantage in terms of the variance of the estimates.
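
    As a generic illustration of the planar rectangular (Clements-style) mesh geometry mentioned above, and not of the thesis's characterization algorithms or its 32-mode device, the sketch below assembles an n-mode unitary from nearest-neighbour Mach-Zehnder blocks and checks unitarity; the 2x2 block convention is one common choice and the phase settings are random placeholders.

        import numpy as np

        def mzi(n_modes, m, theta, phi):
            # 2x2 Mach-Zehnder block acting on modes (m, m+1), embedded in an
            # n_modes x n_modes identity.
            T = np.eye(n_modes, dtype=complex)
            T[m:m+2, m:m+2] = [[np.exp(1j*phi)*np.cos(theta), -np.sin(theta)],
                               [np.exp(1j*phi)*np.sin(theta),  np.cos(theta)]]
            return T

        def rectangular_mesh(n_modes, thetas, phis):
            # Rectangular mesh: n_modes layers of nearest-neighbour MZIs,
            # alternating between even and odd mode pairs.
            U = np.eye(n_modes, dtype=complex)
            k = 0
            for layer in range(n_modes):
                for m in range(layer % 2, n_modes - 1, 2):
                    U = mzi(n_modes, m, thetas[k], phis[k]) @ U
                    k += 1
            return U

        # Quick unitarity check with random settings (n*(n-1)/2 MZIs in total).
        n = 6
        n_mzi = n * (n - 1) // 2
        rng = np.random.default_rng(0)
        U = rectangular_mesh(n, rng.uniform(0, np.pi, n_mzi), rng.uniform(0, 2*np.pi, n_mzi))
        assert np.allclose(U.conj().T @ U, np.eye(n))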

    Pricing financial and insurance products in the multivariate setting

    In finance and insurance there is often the need to construct multivariate distributions to take into account more than one source of risk, where such risks cannot be assumed to be independent. In the course of this thesis we explore three models, namely copula models, the trivariate reduction scheme, and mixtures, as candidate models for capturing the dependence between multiple sources of risk. This thesis contains the results of three different projects. The first is in financial mathematics, more precisely on the pricing of financial derivatives (multi-asset options) which depend on multiple underlying assets, where we construct the dependence between such assets using copula models and the trivariate reduction scheme. The second and third projects are in actuarial mathematics, more specifically on the pricing of the premia that need to be paid by policyholders in automobile insurance when more than one type of claim is considered. We carry out the pricing using all the information available about the characteristics of the policyholders and their cars (i.e. a priori ratemaking) and about the numbers of claims per type in which the policyholders have been involved (i.e. a posteriori ratemaking). In both projects we model the dependence between the multiple types of claims using mixture distributions/regression models: the different types of claims are modelled with their own distribution/regression model but share a common heterogeneity factor, following a mixing distribution/regression model, which is responsible for the dependence between the claim types. In the second project we present a new model (the bivariate Negative Binomial-Inverse Gaussian regression model) and in the third a new family of models (the bivariate mixed Poisson regression models with varying dispersion), both as suitable alternatives to the classically used bivariate mixed Poisson regression models.
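
    The common-heterogeneity construction described above can be illustrated with a minimal simulation sketch (not the thesis's model): two claim counts are conditionally independent Poisson given a shared mixing factor theta, and the shared theta is what induces the dependence between claim types. The gamma mixing distribution and the rate parameters below are illustrative stand-ins; the thesis works with, e.g., inverse Gaussian and varying-dispersion mixing instead.

        import numpy as np

        def simulate_bivariate_mixed_poisson(lam1, lam2, n, shape=2.0, seed=0):
            # N_j | theta ~ Poisson(lam_j * theta), j = 1, 2, with a common
            # heterogeneity factor theta drawn from a mean-one gamma distribution.
            rng = np.random.default_rng(seed)
            theta = rng.gamma(shape, 1.0 / shape, size=n)
            n1 = rng.poisson(lam1 * theta)
            n2 = rng.poisson(lam2 * theta)
            return n1, n2

        n1, n2 = simulate_bivariate_mixed_poisson(lam1=0.3, lam2=0.1, n=100_000)
        print(np.corrcoef(n1, n2)[0, 1])   # positive correlation induced by the shared theta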