
    Serve or Skip: The Power of Rejection in Online Bottleneck Matching

    We consider the online matching problem, where n server-vertices lie in a metric space and n request-vertices that arrive over time must each be immediately and permanently assigned to a server-vertex. We focus on the egalitarian bottleneck objective, where the goal is to minimize the maximum distance between any request and its server. It has been demonstrated that while there are effective algorithms for the utilitarian objective (minimizing total cost) in the resource augmentation setting where the offline adversary has half the resources, these are not effective for the egalitarian objective. We therefore propose a new Serve-or-Skip bicriteria analysis model, in which the online algorithm may reject or skip up to a specified number of requests, and introduce two greedy algorithms, GRINN(t) and GRIN(t). We show that the Serve-or-Skip model of resource augmentation analysis can essentially simulate the doubled-server-capacity model, and then examine the performance of GRINN(t) and GRIN(t).
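    To make the bottleneck objective concrete, here is a minimal sketch of a plain greedy baseline that assigns each arriving request to the nearest still-free server and reports the maximum request-server distance. It is not the paper's GRINN(t) or GRIN(t) (which may additionally skip requests); the function name, the Euclidean metric, and the example points are illustrative assumptions.

```python
import math

def greedy_online_bottleneck(servers, requests):
    """Plain greedy baseline (not GRINN/GRIN): each arriving request is
    permanently assigned to the nearest still-free server; the bottleneck
    cost is the largest request-to-server distance incurred."""
    free = set(range(len(servers)))          # indices of unused servers
    bottleneck = 0.0
    assignment = {}
    for r_idx, request in enumerate(requests):
        # pick the closest free server under the (assumed) Euclidean metric
        s_idx = min(free, key=lambda s: math.dist(request, servers[s]))
        free.remove(s_idx)
        assignment[r_idx] = s_idx
        bottleneck = max(bottleneck, math.dist(request, servers[s_idx]))
    return assignment, bottleneck

# Example: three servers, three requests arriving in order.
servers = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
requests = [(4.0, 0.0), (0.5, 0.0), (9.0, 0.0)]
print(greedy_online_bottleneck(servers, requests))
```

    A rejection budget, as in the Serve-or-Skip model described above, would let such a greedy rule drop the few requests that would otherwise force a large bottleneck.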

    Approximating the multi-level bottleneck assignment problem.

    We consider the multi-level bottleneck assignment problem (MBA). This problem is described in the recent book 'Assignment Problems' by Burkard et al. (2009) on pages 188-189. One of the applications described there concerns bus driver scheduling. We view the problem as a special case of a bottleneck m-dimensional multi-index assignment problem. We give approximation algorithms and inapproximability results, depending upon the completeness of the underlying graph. Keywords: bottleneck problem; multidimensional assignment; approximation; computational complexity; efficient algorithm.
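    As a point of reference for the objective, the sketch below brute-forces the classical single-level bottleneck assignment problem on a tiny cost matrix, minimizing the maximum cost of any selected pair. It only illustrates the bottleneck criterion; it is not the multi-level algorithms or the approximation guarantees discussed in the paper, and the cost matrix is an invented example.

```python
from itertools import permutations

def bottleneck_assignment_bruteforce(cost):
    """Exact brute force for the classical bottleneck assignment problem:
    choose a permutation assigning row i to column perm[i] so that the
    LARGEST selected cost entry is as small as possible. Exponential in n,
    so only for tiny instances; shown purely to illustrate the objective."""
    n = len(cost)
    best_perm, best_value = None, float("inf")
    for perm in permutations(range(n)):
        value = max(cost[i][perm[i]] for i in range(n))
        if value < best_value:
            best_perm, best_value = perm, value
    return best_perm, best_value

# Example: 3 workers x 3 jobs; the optimum avoids the large entries.
cost = [
    [4, 9, 2],
    [3, 7, 8],
    [6, 1, 5],
]
print(bottleneck_assignment_bruteforce(cost))  # ((2, 0, 1), 3)
```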

    Sampling-based speech parameter generation using moment-matching networks

    This paper presents sampling-based speech parameter generation using moment-matching networks for Deep Neural Network (DNN)-based speech synthesis. Although people never produce exactly the same speech twice, even when expressing the same linguistic and para-linguistic information, typical statistical speech synthesis produces exactly the same speech every time, i.e., there is no inter-utterance variation in synthetic speech. To give synthetic speech natural inter-utterance variation, this paper builds DNN acoustic models that make it possible to randomly sample speech parameters. The DNNs are trained so that the moments of the generated speech parameters are close to those of natural speech parameters. Since the variation of the speech parameters is compressed into a low-dimensional, simple prior noise vector, our algorithm has lower computational cost than direct sampling of speech parameters. As a first step towards generating synthetic speech with natural inter-utterance variation, this paper investigates whether or not the proposed sampling-based generation deteriorates synthetic speech quality. In the evaluation, we compare the speech quality of conventional maximum-likelihood-based generation and the proposed sampling-based generation. The result demonstrates that the proposed generation causes no degradation in speech quality. Comment: Submitted to INTERSPEECH 201
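    The training criterion can be illustrated with a toy moment-matching loss: penalize the gap between the first and second moments (per-dimension mean and variance) of a batch of generated parameters and those of natural parameters. The paper's actual DNN objective is richer; the numpy sketch below, including its stand-in linear generator, is an illustrative assumption only.

```python
import numpy as np

def moment_matching_loss(generated, natural):
    """Toy moment-matching criterion (illustrative, not the paper's exact
    objective): squared distance between per-dimension means and variances
    of generated vs. natural parameter batches of shape (batch, dim)."""
    gen_mean, nat_mean = generated.mean(axis=0), natural.mean(axis=0)
    gen_var,  nat_var  = generated.var(axis=0),  natural.var(axis=0)
    return np.sum((gen_mean - nat_mean) ** 2) + np.sum((gen_var - nat_var) ** 2)

# Example: "generated" samples come from a low-dimensional prior noise
# vector pushed through a crude linear stand-in generator.
rng = np.random.default_rng(0)
natural = rng.normal(loc=1.0, scale=2.0, size=(256, 8))
noise = rng.normal(size=(256, 3))                  # low-dimensional prior
generated = noise @ rng.normal(size=(3, 8)) + 1.0  # stand-in generator
print(float(moment_matching_loss(generated, natural)))
```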