
    Double-Directional Information Azimuth Spectrum and Relay Network Tomography for a Decentralized Wireless Relay Network

    A novel channel representation for a two-hop decentralized wireless relay network (DWRN) is proposed, where the relays operate in a fully distributed fashion. The modeling paradigm applies an approach analogous to the description method for a double-directional multipath propagation channel, and takes into account the finite system spatial resolution and the extended relay listening/transmitting time. Specifically, the double-directional information azimuth spectrum (IAS) is formulated to provide a compact representation of information flows in a DWRN. The proposed channel representation is then analyzed from a geometrically-based statistical modeling perspective. Finally, we look into the problem of relay network tomography (RNT), which solves an inverse problem to infer the internal structure of a DWRN by using the instantaneous double-directional IAS recorded at multiple measuring nodes exterior to the relay region.

    Function approximation via the subsampled Poincaré inequality

    Function approximation and recovery from sampled data have long been studied in a wide array of applied mathematics and statistics fields. Analytic tools, such as the Poincaré inequality, have been handy for estimating the approximation errors at different scales. The purpose of this paper is to study a generalized Poincaré inequality, where the measurement function is of subsampled type, with a small but non-zero lengthscale that will be made precise. Our analysis identifies this inequality as a basic tool for function recovery problems. We discuss and demonstrate the optimality of the inequality with respect to the subsampled lengthscale, connecting it to existing results in the literature. In application to function approximation problems, the approximation accuracy using different basis functions and under different regularity assumptions is established by using the subsampled Poincaré inequality. We observe that the error bound blows up as the subsampled lengthscale approaches zero, due to the fact that the underlying function is not regular enough to have well-defined pointwise values. A weighted version of the Poincaré inequality is proposed to address this problem; its optimality is also discussed.
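For background, the classical Poincaré inequality that the paper generalizes can be stated as follows; the subsampled variant itself is only specified in the paper, so the second display is a schematic sketch, not the authors' precise statement.

```latex
% Classical Poincaré inequality on a bounded Lipschitz domain \Omega:
\[
  \left\| u - \frac{1}{|\Omega|}\int_\Omega u \, dx \right\|_{L^2(\Omega)}
  \le C(\Omega)\, \| \nabla u \|_{L^2(\Omega)}.
\]
% Schematic subsampled form (assumed shape, for orientation only): the single
% global average is replaced by local averages over cells of lengthscale h,
% with a constant C(h) that blows up as h -> 0 when u lacks the regularity
% needed for well-defined pointwise values.
\[
  \left\| u - \Pi_h u \right\|_{L^2(\Omega)}
  \le C(h)\, \| \nabla u \|_{L^2(\Omega)},
\]
% where \Pi_h denotes a (hypothetical, illustrative) subsampled averaging
% operator at scale h.
```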

    A Collective Variational Autoencoder for Top-N Recommendation with Side Information

    Recommender systems have been studied extensively due to their practical use in many real-world scenarios. Despite this, generating effective recommendations with sparse user ratings remains a challenge. Side information associated with items has been widely utilized to address rating sparsity. Existing recommendation models that use side information are linear and, hence, have restricted expressiveness. Deep learning has been used to capture non-linearities by learning deep item representations from side information, but as side information is high-dimensional, existing deep models tend to have large input dimensionality, which dominates their overall size. This makes them difficult to train, especially with small numbers of inputs. Rather than learning item representations, which is problematic with high-dimensional side information, in this paper we propose to learn feature representations through deep learning from side information. Learning feature representations, on the other hand, ensures a sufficient number of inputs to train a deep network. To achieve this, we propose to simultaneously recover user ratings and side information by using a Variational Autoencoder (VAE). Specifically, user ratings and side information are encoded and decoded collectively through the same inference network and generation network. This is possible as both user ratings and side information are data associated with items. To account for the heterogeneity of user ratings and side information, the final layer of the generation network follows different distributions depending on the type of information. The proposed model is easy to implement and efficient to optimize, and is shown to outperform state-of-the-art top-N recommendation methods that use side information.
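The collective encoding/decoding idea can be sketched as a single forward pass: ratings and side information are concatenated into one input for a shared encoder, and the decoder splits into two heads with different output distributions. This is a minimal NumPy sketch under assumed dimensions and single linear layers standing in for deep networks; all names and weights are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: n_items rated items, d_side side-information features,
# d_z latent dimensions (all illustrative).
n_items, d_side, d_z = 100, 40, 8
d_in = n_items + d_side

# Shared inference network: ratings and side information are encoded
# collectively through the same weights (one linear layer as a stand-in).
W_mu = rng.normal(scale=0.1, size=(d_in, d_z))
W_logvar = rng.normal(scale=0.1, size=(d_in, d_z))

# Generation network with a heterogeneous final layer: multinomial logits
# for ratings, Gaussian mean for real-valued side information.
W_rat = rng.normal(scale=0.1, size=(d_z, n_items))
W_side = rng.normal(scale=0.1, size=(d_z, d_side))

def collective_vae_forward(ratings, side):
    """One reparameterized forward pass through the shared encoder/decoders."""
    x = np.concatenate([ratings, side])          # collective input
    mu, logvar = x @ W_mu, x @ W_logvar
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=d_z)  # reparameterization
    logits = z @ W_rat                           # multinomial head (ratings)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    side_mean = z @ W_side                       # Gaussian head (side info)
    return probs, side_mean

ratings = (rng.random(n_items) < 0.05).astype(float)  # sparse binary ratings
side = rng.normal(size=d_side)
probs, side_mean = collective_vae_forward(ratings, side)
```

Training would maximize the usual evidence lower bound, with the reconstruction term summing a multinomial log-likelihood over ratings and a Gaussian log-likelihood over side information.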

    Sample Complexity of Sample Average Approximation for Conditional Stochastic Optimization

    In this paper, we study a class of stochastic optimization problems, referred to as \emph{Conditional Stochastic Optimization} (CSO), of the form $\min_{x \in \mathcal{X}} \mathbb{E}_{\xi}\, f_\xi\big(\mathbb{E}_{\eta|\xi}[g_\eta(x,\xi)]\big)$, which finds a wide spectrum of applications including portfolio selection, reinforcement learning, robust learning, causal inference, and so on. Assuming availability of samples from the distribution $\mathbb{P}(\xi)$ and samples from the conditional distribution $\mathbb{P}(\eta|\xi)$, we establish the sample complexity of the sample average approximation (SAA) for CSO under a variety of structural assumptions, such as Lipschitz continuity, smoothness, and error bound conditions. We show that the total sample complexity improves from $\mathcal{O}(d/\epsilon^4)$ to $\mathcal{O}(d/\epsilon^3)$ when assuming smoothness of the outer function, and further to $\mathcal{O}(1/\epsilon^2)$ when the empirical function satisfies the quadratic growth condition. We also establish the sample complexity of a modified SAA when $\xi$ and $\eta$ are independent. Several numerical experiments further support our theoretical findings. Keywords: stochastic optimization, sample average approximation, large deviations theory
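The nested structure of the SAA estimator can be illustrated on a toy CSO instance (the instance and all names below are illustrative choices, not from the paper): with $\xi \sim N(0,1)$, $\eta|\xi \sim N(\xi,1)$, $g_\eta(x,\xi) = x + \eta$, and $f_\xi(y) = y^2$, the objective is $F(x) = \mathbb{E}[(x+\xi)^2] = x^2 + 1$, minimized at $x = 0$. The SAA draws $n$ outer samples $\xi_i$ and $m$ conditional samples $\eta_{ij}$ per $\xi_i$, and plugs the inner empirical mean into the outer one; the bias of this inner plug-in is what drives the extra inner-sample cost in the complexity bounds.

```python
import numpy as np

rng = np.random.default_rng(1)

def saa_objective(x, xis, etas):
    """SAA estimate: (1/n) sum_i f( (1/m) sum_j g(x, xi_i, eta_ij) ),
    here with g_eta(x, xi) = x + eta and f_xi(y) = y^2."""
    inner = x + etas.mean(axis=1)   # inner empirical conditional expectation
    return np.mean(inner ** 2)      # outer empirical expectation of f

n, m = 2000, 50                     # outer / inner sample sizes
xis = rng.normal(size=n)
# Conditional samples eta_ij | xi_i ~ N(xi_i, 1):
etas = xis[:, None] + rng.normal(size=(n, m))

# Minimize the SAA objective over a grid (toy stand-in for a real solver).
grid = np.linspace(-2.0, 2.0, 401)
x_hat = grid[np.argmin([saa_objective(x, xis, etas) for x in grid])]
# x_hat should be close to the true minimizer x* = 0.
```

Increasing the inner sample size $m$ shrinks the plug-in bias, while increasing $n$ shrinks the outer variance; the paper's complexity results quantify how the two must scale jointly with the target accuracy $\epsilon$.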