
    Automatic Raga Recognition in Hindustani Classical Music

    Raga is the central melodic concept in Hindustani Classical Music. It has a complex structure, often characterized by pathos. In this paper, we describe a technique for Automatic Raga Recognition based on pitch distributions. We are able to successfully classify ragas with a commendable accuracy on our test dataset.
    Comment: Seminar on Computer Music, RWTH Aachen, http://hpac.rwth-aachen.de/teaching/sem-mus-17/Reports/Alekh.pd
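    A minimal sketch of one way a pitch-distribution approach could look (the libraries, the fixed reference tonic, and the nearest-neighbour classifier are illustrative assumptions, not details from the paper):

```python
# Hedged sketch of a pitch-distribution raga classifier (not the paper's exact method).
# Assumes librosa for pitch tracking and scikit-learn for classification.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def pitch_class_histogram(audio_path, bins_per_octave=12):
    """Fold an estimated pitch track into a normalized pitch-class histogram."""
    y, sr = librosa.load(audio_path, sr=None)
    f0 = librosa.yin(y, fmin=librosa.note_to_hz('C2'),
                     fmax=librosa.note_to_hz('C7'), sr=sr)
    f0 = f0[np.isfinite(f0) & (f0 > 0)]
    # Map frequencies to pitch classes relative to an assumed fixed reference (C).
    pitch_classes = (np.round(12 * np.log2(f0 / librosa.note_to_hz('C1')))
                     % bins_per_octave).astype(int)
    hist = np.bincount(pitch_classes, minlength=bins_per_octave).astype(float)
    return hist / hist.sum()

# Train a simple nearest-neighbour classifier on labelled recordings
# (file paths and labels below are placeholders).
train_paths, train_labels = ["yaman_01.wav", "bhairav_01.wav"], ["Yaman", "Bhairav"]
X = np.stack([pitch_class_histogram(p) for p in train_paths])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, train_labels)
print(clf.predict([pitch_class_histogram("unknown_raga.wav")]))
```

    In practice, such histograms would normally be folded relative to the tonic of each performance rather than a fixed reference note, a detail this toy sketch glosses over.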

    Human Aspects and Perception of Privacy in Relation to Personalization

    The concept of privacy is inherently intertwined with human attitudes and behaviours, as most computer systems are primarily designed for human use. This is especially true of Recommender Systems, which feed on information provided by individuals: their efficacy critically depends on whether or not information is externalized and, if it is, how much of this information contributes positively to their performance and accuracy. In this paper, we discuss the impact of several factors on users' information disclosure behaviours and privacy-related attitudes, and how users of recommender systems can be nudged into making better privacy decisions for themselves. We also address the problem of privacy adaptation, i.e. effectively tailoring Recommender Systems by gaining a deeper understanding of people's cognitive decision-making processes.
    Comment: Seminar on Privacy and Big Data, Summer Semester 2017, Informatik 5, RWTH Aachen University, Germany

    A Lower Bound for the Optimization of Finite Sums

    This paper presents a lower bound for optimizing a finite sum of $n$ functions, where each function is $L$-smooth and the sum is $\mu$-strongly convex. We show that no algorithm can reach an error $\epsilon$ in minimizing all functions from this class in fewer than $\Omega(n + \sqrt{n(\kappa-1)}\log(1/\epsilon))$ iterations, where $\kappa = L/\mu$ is a surrogate condition number. We then compare this lower bound to upper bounds for recently developed methods specializing to this setting. When the functions involved in this sum are not arbitrary, but based on i.i.d. random data, we further contrast these complexity results with those for optimal first-order methods that directly optimize the sum. We conclude that a lot of caution is necessary for an accurate comparison, and we identify machine learning scenarios where the new methods help computationally.
    Comment: Added an erratum; we are currently working on extending the result to randomized algorithms
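    To make the bound concrete, the following back-of-the-envelope comparison evaluates the stated lower bound against standard textbook upper bounds for full-gradient and incremental methods; the values of $n$, $\kappa$, and $\epsilon$ are arbitrary illustrations, not figures from the paper:

```python
# Hedged comparison of gradient-evaluation counts for minimizing a sum of n
# L-smooth functions whose sum is mu-strongly convex. The upper bounds in the
# comments are the usual textbook rates, not results from this paper.
import math

def lower_bound(n, kappa, eps):
    # Omega(n + sqrt(n * (kappa - 1)) * log(1 / eps)), as quoted in the abstract.
    return n + math.sqrt(n * (kappa - 1)) * math.log(1.0 / eps)

def full_gradient(n, kappa, eps):
    # Plain gradient descent: O(kappa * log(1/eps)) iterations, n gradients each.
    return n * kappa * math.log(1.0 / eps)

def incremental(n, kappa, eps):
    # Typical rate for SAG/SVRG/SAGA-style methods: O((n + kappa) * log(1/eps)).
    return (n + kappa) * math.log(1.0 / eps)

n, kappa, eps = 10**6, 10**4, 1e-6
for name, f in [("lower bound", lower_bound),
                ("full gradient", full_gradient),
                ("incremental", incremental)]:
    print(f"{name:>13}: {f(n, kappa, eps):.3e} gradient evaluations")
```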

    Distributed Delayed Stochastic Optimization

    We analyze the convergence of gradient-based optimization algorithms that base their updates on delayed stochastic gradient information. The main application of our results is to the development of gradient-based distributed optimization algorithms in which a master node performs parameter updates while worker nodes compute stochastic gradients based on local information in parallel, which may give rise to delays due to asynchrony. We take motivation from statistical problems where the size of the data is so large that it cannot fit on one computer; with the advent of huge datasets in biology, astronomy, and the internet, such problems are now common. Our main contribution is to show that for smooth stochastic problems, the delays are asymptotically negligible and we can achieve order-optimal convergence results. In application to distributed optimization, we develop procedures that overcome communication bottlenecks and synchronization requirements. We show $n$-node architectures whose optimization error in stochastic problems, in spite of asynchronous delays, scales asymptotically as $\mathcal{O}(1/\sqrt{nT})$ after $T$ iterations. This rate is known to be optimal for a distributed system with $n$ nodes even in the absence of delays. We additionally complement our theoretical results with numerical experiments on a statistical machine learning task.
    Comment: 27 pages, 4 figures
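    A minimal sketch of the delayed-gradient setting on a toy least-squares problem, assuming a fixed delay and a $1/\sqrt{t}$ step size (both are illustrative choices, not the paper's exact procedures):

```python
# Hedged simulation of delayed stochastic gradient descent: the master applies
# gradients computed at parameters from `delay` steps ago, mimicking stale
# updates from asynchronous workers. The problem, step size, and fixed-delay
# model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, T, delay = 10, 5000, 5
x_star = rng.normal(size=d)               # ground-truth parameters

def stochastic_grad(x):
    """Gradient of 0.5*(a^T x - b)^2 for one random data point (a, b)."""
    a = rng.normal(size=d)
    b = a @ x_star + 0.1 * rng.normal()
    return (a @ x - b) * a

x = np.zeros(d)
history = [x.copy()]                      # past iterates, used to model staleness
for t in range(1, T + 1):
    stale_x = history[max(0, len(history) - 1 - delay)]   # parameters the worker saw
    g = stochastic_grad(stale_x)          # gradient computed at stale parameters
    x = x - (1.0 / np.sqrt(t)) * g        # master update with a 1/sqrt(t) step size
    history.append(x.copy())

print("error after T steps:", np.linalg.norm(x - x_star))
```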

    Optimal Allocation Strategies for the Dark Pool Problem

    We study the problem of allocating stocks to dark pools. We propose and analyze an optimal approach for allocations when continuous-valued allocations are allowed, and we also propose a modification for the case when only integer-valued allocations are possible. We extend previous work on this problem to adversarial scenarios, while also improving on its results in the i.i.d. setup. The resulting algorithms are efficient and perform well in simulations under both stochastic and adversarial inputs.
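    A toy simulation of the setting, using a simple multiplicative-weights heuristic for continuous-valued allocations with censored feedback; the venue model, parameters, and update rule are assumptions for illustration and not the optimal algorithm analyzed in the paper:

```python
# Hedged toy simulation of the dark-pool allocation problem: split an order of V
# shares across K venues with hidden liquidity, observe only the censored fills
# min(allocation, liquidity), and adapt the allocation weights multiplicatively.
import numpy as np

rng = np.random.default_rng(1)
K, V, T, eta = 4, 100, 2000, 0.05
true_mean_liquidity = np.array([5.0, 20.0, 40.0, 1.0])   # hidden per-round capacity
weights = np.ones(K)

filled_total = 0.0
for t in range(T):
    alloc = V * weights / weights.sum()                  # continuous-valued allocation
    liquidity = rng.poisson(true_mean_liquidity)         # unknown available volume
    fills = np.minimum(alloc, liquidity)                 # censored feedback
    filled_total += fills.sum()
    fill_rate = fills / np.maximum(alloc, 1e-12)         # fraction of allocation consumed
    weights *= np.exp(eta * fill_rate)                   # reward venues that absorbed more

print("average shares filled per round:", filled_total / T)
print("final allocation:", np.round(V * weights / weights.sum(), 1))
```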