
    Generating Aspect-oriented Multi-document Summarization with Event-Aspect Model

    In this paper, we propose a novel approach to the automatic generation of aspect-oriented summaries from multiple documents. We first develop an event-aspect LDA model to cluster sentences into aspects. We then use an extended LexRank algorithm to rank the sentences in each cluster, and Integer Linear Programming for sentence selection. Key features of our method include the automatic grouping of semantically related sentences and sentence ranking based on an extension of the random walk model. We also implement a new sentence compression algorithm that uses dependency trees instead of parse trees. We compare our method with four baseline methods; quantitative evaluation based on the ROUGE metric demonstrates the effectiveness and advantages of our method.
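
    As a rough, hedged illustration of the ranking stage only, the sketch below runs a generic LexRank-style damped power iteration over a cosine-similarity graph of one sentence cluster. The TF-IDF features, the damping value, and the toy cluster are assumptions for the example, not details taken from the paper's extended variant.

```python
# Minimal LexRank-style sentence ranking sketch (assumed generic setup,
# not the paper's extended algorithm): build a cosine-similarity graph
# over one aspect cluster and run a damped random-walk power iteration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def lexrank_scores(sentences, damping=0.85, tol=1e-6, max_iter=100):
    tfidf = TfidfVectorizer().fit_transform(sentences)   # L2-normalized TF-IDF rows
    sim = (tfidf @ tfidf.T).toarray()                    # dot of normalized rows = cosine similarity
    np.fill_diagonal(sim, 0.0)                           # ignore self-similarity
    row_sums = sim.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0                        # guard isolated sentences
    p = (sim / row_sums).T                               # column-stochastic transition matrix
    n = len(sentences)
    r = np.full(n, 1.0 / n)                              # uniform start distribution
    for _ in range(max_iter):
        r_next = (1 - damping) / n + damping * (p @ r)   # random-walk step with teleport
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r

cluster = ["The quake struck at dawn.",
           "An earthquake hit the region early in the morning.",
           "Rescue teams arrived within hours."]
print(lexrank_scores(cluster))  # higher score = more central sentence
```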

    Expediting TTS Synthesis with Adversarial Vocoding

    Recent approaches in text-to-speech (TTS) synthesis employ neural network strategies to vocode perceptually-informed spectrogram representations directly into listenable waveforms. Such vocoding procedures create a computational bottleneck in modern TTS pipelines. We propose an alternative approach which utilizes generative adversarial networks (GANs) to learn mappings from perceptually-informed spectrograms to simple magnitude spectrograms, which can be heuristically vocoded. Through a user study, we show that our approach significantly outperforms naïve vocoding strategies while being hundreds of times faster than the neural network vocoders used in state-of-the-art TTS systems. We also show that our method can be used to achieve state-of-the-art results in unsupervised synthesis of individual words of speech.
    Comment: Published as a conference paper at INTERSPEECH 201
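
    To make the "heuristically vocoded" step concrete, here is a small sketch of the classic Griffin-Lim phase-reconstruction heuristic applied to a magnitude spectrogram via librosa. The GAN that predicts magnitude spectrograms from perceptual spectrograms is out of scope here, and the file name and STFT parameters are assumptions for illustration, not the paper's configuration.

```python
# Sketch of the heuristic vocoding stage only: recover a waveform from a
# magnitude spectrogram with Griffin-Lim phase estimation. The GAN mapping
# described in the abstract is NOT implemented here; we fake its output by
# taking the magnitude STFT of a (hypothetical) input recording.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("speech.wav", sr=22050)                # hypothetical input file
mag = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))   # stand-in for a GAN-predicted magnitude spectrogram
wave = librosa.griffinlim(mag, n_iter=60,                   # iterative phase reconstruction
                          hop_length=256, win_length=1024)
sf.write("reconstructed.wav", wave, sr)
```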

    Bridging the Gap between Enumerative and Symbolic Model Checkers

    We present a method to perform symbolic state space generation for languages with existing enumerative state generators. The method is largely independent of the chosen modelling language. We validated this on three different types of languages and tools: state-based languages (PROMELA), action-based process algebras (muCRL, mCRL2), and discrete abstractions of ODEs (Maple). Only a little information about the combinatorial structure of the underlying model checking problem needs to be provided. The key enabling data structure is the "PINS" dependency matrix. Moreover, it can be provided gradually (more precise information yields better results).

    Second, in addition to symbolic reachability, the same PINS matrix contains enough information to enable new optimizations in state space generation (transition caching), again independent of the chosen modelling language. We have also based existing optimizations, like (recursive) state collapsing, on top of PINS, and hint at how to support partial order reduction techniques.

    Third, PINS allows interfacing of existing state generators to, e.g., distributed reachability tools. Thus, besides the stated novelties, the method we propose also significantly reduces the complexity of building modular yet still efficient model checking tools.

    Our experiments show that we can match or even outperform existing tools by reusing their own state generators, which we have linked into an implementation of our ideas.
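
    A minimal sketch of the transition-caching idea enabled by a dependency matrix follows. The state representation, the two transition groups, and their toy semantics are invented for illustration; the real PINS interface is a C-level API and looks nothing like this. The point is only that when a group reads and writes a known subset of state slots, next-state results can be cached per projected short vector instead of per full state.

```python
# Illustrative transition caching driven by a PINS-style dependency matrix.
# Each transition group depends only on the state slots listed in its row,
# so the (expensive) next-state generator is called at most once per
# distinct projection, not once per full state.

DEPS = [(0, 1), (2,)]  # row g lists the slots group g reads/writes (toy matrix)

def step_group(g, short):
    """Next-state function on the projected slots of group g (toy semantics)."""
    if g == 0:                        # group 0: move a token from slot 0 to slot 1
        a, b = short
        return [(a - 1, b + 1)] if a > 0 else []
    else:                             # group 1: bounded counter on slot 2
        (c,) = short
        return [(c + 1,)] if c < 2 else []

def successors(state, cache):
    for g, slots in enumerate(DEPS):
        short = tuple(state[i] for i in slots)      # project onto dependent slots
        if (g, short) not in cache:                 # generator called once per projection
            cache[(g, short)] = step_group(g, short)
        for nxt_short in cache[(g, short)]:
            nxt = list(state)
            for i, v in zip(slots, nxt_short):      # write back only the dependent slots
                nxt[i] = v
            yield tuple(nxt)

def reachable(init):
    cache, seen, todo = {}, {init}, [init]
    while todo:
        for nxt in successors(todo.pop(), cache):
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

print(sorted(reachable((2, 0, 0))))  # 9 states: 3 token splits x 3 counter values
```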

    Faster subsequence recognition in compressed strings

    Computation on compressed strings is one of the key approaches to processing massive data sets. We consider local subsequence recognition problems on strings compressed by straight-line programs (SLPs), a compression scheme closely related to Lempel–Ziv compression. For an SLP-compressed text of length $\bar m$ and an uncompressed pattern of length $n$, Cégielski et al. gave an algorithm for local subsequence recognition running in time $O(\bar m n^2 \log n)$. We improve the running time to $O(\bar m n^{1.5})$. Our algorithm can also be used to compute the longest common subsequence between a compressed text and an uncompressed pattern in time $O(\bar m n^{1.5})$; the same problem with a compressed pattern is known to be NP-hard.
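
    As a rough illustration of why subsequence problems are tractable on SLPs without decompressing, the sketch below computes, for each grammar symbol, the map sending a pattern position to the position reached after matching the symbol's expansion, and composes these maps bottom-up. This is the simple $O(\bar m n)$ global subsequence check, not the paper's faster local-recognition algorithm, and the grammar used is an invented example.

```python
# Illustrative sketch: test whether `pattern` is a subsequence of the text
# generated by an SLP, without expanding the text. For each symbol X we
# compute f_X, where f_X[i] is the pattern position reached after greedily
# matching pattern[i:] against the expansion of X. For a rule X -> Y Z,
# f_X = f_Z o f_Y, so each rule costs O(n); total O(m * n) for m rules.
# (Naive global check only, not the paper's O(m * n^1.5) local algorithm.)

def subsequence_in_slp(pattern, terminals, rules, start):
    n = len(pattern)
    f = {}
    for sym, ch in terminals.items():                  # leaves expand to one character
        f[sym] = [i + 1 if i < n and pattern[i] == ch else i for i in range(n + 1)]
    for sym, (left, right) in rules:                   # rules given in bottom-up order
        fl, fr = f[left], f[right]
        f[sym] = [fr[fl[i]] for i in range(n + 1)]     # compose: left part, then right part
    return f[start][0] == n                            # whole pattern consumed from position 0

# Invented SLP generating "abab": A->'a', B->'b', X->AB, S->XX
terminals = {"A": "a", "B": "b"}
rules = [("X", ("A", "B")), ("S", ("X", "X"))]
print(subsequence_in_slp("aba", terminals, rules, "S"))   # True
print(subsequence_in_slp("bb",  terminals, rules, "S"))   # True
print(subsequence_in_slp("aaa", terminals, rules, "S"))   # False
```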