
    Mental state estimation for brain-computer interfaces

    Mental state estimation is potentially useful for the development of asynchronous brain-computer interfaces. In this study, four mental states have been identified and decoded from the electrocorticograms (ECoGs) of six epileptic patients engaged in a memory reach task. A novel signal analysis technique has been applied to high-dimensional, statistically sparse ECoGs recorded by a large number of electrodes. The strength of the proposed technique lies in its ability to jointly extract the spatial and temporal patterns responsible for encoding mental state differences. As such, the technique offers a systematic way of analyzing the spatiotemporal aspects of brain information processing and may be applicable to a wide range of spatiotemporal neurophysiological signals.
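    One common way to jointly extract spatial and temporal patterns from a channels-by-time recording is a low-rank (SVD) decomposition. The sketch below illustrates that general idea only, not the paper's specific technique: synthetic data stands in for ECoG, with a planted per-electrode weight vector and temporal waveform recovered from a noisy multichannel signal.

```python
import numpy as np

rng = np.random.default_rng(1)
channels, samples = 32, 200

# Planted rank-1 spatiotemporal pattern buried in noise
# (synthetic stand-in for a multichannel ECoG difference map).
spatial = rng.normal(size=channels)                        # per-electrode weights
temporal = np.sin(np.linspace(0.0, 4.0 * np.pi, samples))  # temporal waveform
data = np.outer(spatial, temporal) + 0.1 * rng.normal(size=(channels, samples))

# The leading singular vectors jointly estimate the spatial and
# temporal patterns (up to sign and scale).
u, s, vt = np.linalg.svd(data, full_matrices=False)
spatial_est, temporal_est = u[:, 0], vt[0]
```

    The leading left singular vector lives in electrode space and the leading right singular vector in time, which is what "jointly extracting spatial and temporal patterns" amounts to in this simplified rank-1 setting.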

    Optimal time sharing in underlay cognitive radio systems with RF energy harvesting

    Due to a fundamental tradeoff, spectrum efficiency and energy efficiency are two contending design challenges for future wireless networks. However, applying radio-frequency (RF) energy harvesting (EH) in a cognitive radio system could potentially circumvent this tradeoff, resulting in a secondary system with a limitless power supply and meaningful achievable information rates. This paper proposes an online solution for the optimal time allocation (time sharing) between the EH phase and the information transmission (IT) phase in an underlay cognitive radio system, which harvests the RF energy originating from the primary system. The proposed online solution maximizes the average achievable rate of the cognitive radio system, subject to an ε-percentile protection criterion for the primary system. The optimal time sharing achieves significant gains compared to equal time allocation between the EH and IT phases.
    Comment: Proceedings of the 2015 IEEE International Conference on Communications (IEEE ICC 2015), 8-12 June 2015, London, UK
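    The time-sharing tradeoff can be illustrated with a toy harvest-then-transmit model: energy harvested during a fraction τ of the slot powers transmission over the remaining 1 − τ, so a longer EH phase means more transmit power but less transmission time. All parameter values below are hypothetical, and the grid search merely stands in for the paper's online optimization.

```python
import math

def achievable_rate(tau, p_harvest=1.0, eta=0.6, gain=2.0, noise=0.1):
    """Rate of a toy harvest-then-transmit slot split at fraction tau.

    All parameter values are hypothetical illustrations, not the
    paper's system model or its online algorithm.
    """
    if tau <= 0.0 or tau >= 1.0:
        return 0.0
    # Energy harvested during tau is spent over the remaining 1 - tau.
    tx_power = eta * p_harvest * tau / (1.0 - tau)
    return (1.0 - tau) * math.log2(1.0 + tx_power * gain / noise)

# Grid search over the time-sharing fraction.
grid = [i / 1000.0 for i in range(1, 1000)]
best_tau = max(grid, key=achievable_rate)
```

    Even in this toy model the optimum generally differs from the equal split τ = 0.5, consistent with the gains the abstract reports over equal time allocation.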

    Shape and Illumination from Shading Using the Generic Viewpoint Assumption

    The Generic Viewpoint Assumption (GVA) states that the position of the viewer or the light in a scene is not special. Thus, any parameters estimated from an observation should be stable under small perturbations such as object, viewpoint, or light positions. The GVA has been analyzed and quantified in previous works, but has not been put to practical use in actual vision tasks. In this paper, we show how to utilize the GVA to estimate shape and illumination from a single shading image, without the use of other priors. We propose a novel linearized Spherical Harmonics (SH) shading model which enables us to obtain a computationally efficient form of the GVA term. Together with a data term, we build a model whose unknowns are shape and SH illumination. The model parameters are estimated using the Alternating Direction Method of Multipliers embedded in a multi-scale estimation framework. In this prior-free framework, we obtain competitive shape and illumination estimation results under a variety of models and lighting conditions, requiring fewer assumptions than competing methods.
    Funding: National Science Foundation (U.S.), Directorate for Computer and Information Science and Engineering / Division of Information & Intelligent Systems (Award 1212928); Qatar Computing Research Institute
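    A first-order SH shading model is linear in the surface normal, which is what makes the linearization in the abstract attractive. The sketch below evaluates s(n) = l0 + l1·nx + l2·ny + l3·nz for a batch of normals; the coefficient values are hypothetical, not a fitted illumination from the paper.

```python
import numpy as np

def sh_shading(normals, light):
    """First-order spherical-harmonics (SH) shading, linear in the normal.

    normals: (N, 3) unit surface normals; light: 4 SH coefficients.
    The coefficients used below are hypothetical illustration values.
    """
    n = np.asarray(normals, dtype=float)
    basis = np.hstack([np.ones((n.shape[0], 1)), n])  # [1, nx, ny, nz]
    return basis @ np.asarray(light, dtype=float)

normals = np.array([[0.0, 0.0, 1.0],   # normal facing the camera
                    [0.0, 1.0, 0.0]])  # normal facing up
light = [0.5, 0.1, 0.2, 0.8]           # ambient + directional terms
shading = sh_shading(normals, light)   # -> [1.3, 0.7]
```

    Because shading is linear in both the normals and the light coefficients, perturbation terms of the kind the GVA requires reduce to matrix products, which is one plausible reading of why the linearized model yields a computationally efficient GVA term.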

    Learning Mixtures of Gaussians in High Dimensions

    Efficiently learning mixtures of Gaussians is a fundamental problem in statistics and learning theory. Given samples drawn from a random one of k Gaussian distributions in R^n, the learning problem asks to estimate the means and the covariance matrices of these Gaussians. This learning problem arises in many areas ranging from the natural sciences to the social sciences, and has also found many machine learning applications. Unfortunately, learning mixtures of Gaussians is an information-theoretically hard problem: in order to learn the parameters up to a reasonable accuracy, the number of samples required is exponential in the number of Gaussian components in the worst case. In this work, we show that, provided we are in high enough dimensions, the class of Gaussian mixtures is learnable in its most general form under a smoothed analysis framework, where the parameters are randomly perturbed from an adversarial starting point. In particular, given samples from a mixture of Gaussians with randomly perturbed parameters, when n > Ω(k^2), we give an algorithm that learns the parameters in polynomial running time and using a polynomial number of samples. The central algorithmic ideas consist of new ways to decompose the moment tensor of the Gaussian mixture by exploiting its structural properties. The symmetries of this tensor are derived from the combinatorial structure of higher-order moments of Gaussian distributions (sometimes referred to as Isserlis' theorem or Wick's theorem). We also develop new tools for bounding the smallest singular values of structured random matrices, which could be useful in other smoothed analysis settings.
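    Isserlis'/Wick's theorem expresses higher-order moments of a zero-mean Gaussian as sums over pairings of the covariance Σ; for the fourth moment, E[x_i x_j x_k x_l] = Σ_ij Σ_kl + Σ_ik Σ_jl + Σ_il Σ_jk. A quick Monte Carlo check of this identity (illustrative only, not the paper's tensor-decomposition algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.6],
                [0.6, 2.0]])
x = rng.multivariate_normal(np.zeros(2), cov, size=1_000_000)

# Wick/Isserlis pairing formula for the fourth moment E[x_i x_j x_k x_l].
i, j, k, l = 0, 1, 0, 1
wick = cov[i, j] * cov[k, l] + cov[i, k] * cov[j, l] + cov[i, l] * cov[j, k]
empirical = float(np.mean(x[:, i] * x[:, j] * x[:, k] * x[:, l]))
# Here wick = 0.36 + 2.0 + 0.36 = 2.72, and the sample moment converges to it.
```

    These pairing identities are the source of the symmetries in the mixture's moment tensor that the abstract's decomposition exploits.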