23 research outputs found
A Comparative Analysis of the Socio-Demographic Development of the Cities in Siberia and Amazonia in the Second Half of the 20th Century
The article examines the prospects of using comparative analysis in the study of socio-demographic processes in the Amazonian regions of Brazil and in Siberia. The authors hypothesize that Amazonia is one of the optimal cases for comparison with Siberia. Comparative analysis requires that the compared cases (sets of objects, phenomena, and processes) share a number of common features while at the same time exhibiting tangible differences. Despite their apparent heterogeneity, Amazonia and Siberia have much in common. Foremost among the similarities is their frontier position: both Siberia and Amazonia were regions of European colonization. Throughout the 20th century, Siberia remained a relatively undeveloped territory, in effect still a frontier, and Brazil's Amazonia plays a similar role. Both regions have a similar economic profile, acting as resource regions rich in timber, metals, and other minerals, and they have similar energy profiles as well. The article highlights that development in both regions peaked in the second half of the 20th century, and it also discusses parallel phenomena in the demographic processes of the two regions. The findings have allowed the authors to formulate the main directions for further comparative analysis of the socio-demographic development of the large urban centers in Siberia and Amazonia.
State-of-the-art Speech Recognition With Sequence-to-Sequence Models
Attention-based encoder-decoder architectures such as Listen, Attend, and Spell (LAS) subsume the acoustic, pronunciation, and language model components of a traditional automatic speech recognition (ASR) system into a single neural network. In previous work, we have shown that such architectures are comparable to state-of-the-art ASR systems on dictation tasks, but it was not clear whether such architectures would be practical for more challenging tasks such as voice search. In this work, we explore a variety of structural and optimization improvements to our LAS model which significantly improve performance. On the structural side, we show that word piece models can be used instead of graphemes. We also introduce a multi-head attention architecture, which offers improvements over the commonly used single-head attention. On the optimization side, we explore synchronous training, scheduled sampling, label smoothing, and minimum word error rate optimization, all of which are shown to improve accuracy. We present results with a unidirectional LSTM encoder for streaming recognition. On a 12,500-hour voice search task, we find that the proposed changes improve the WER from 9.2% to 5.6%, while the best conventional system achieves 6.7%; on a dictation task, our model achieves a WER of 4.1% compared to 5% for the conventional system.
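The headline numbers in this abstract are word error rates (WER): the word-level edit distance between the hypothesis and the reference transcript, normalized by the number of reference words. For readers unfamiliar with the metric, here is a minimal Python sketch of the standard WER computation; this is generic illustration code, not from the paper.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance (substitutions,
    insertions, deletions) divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution/match
    return dp[len(ref)][len(hyp)] / len(ref)

# One deletion ("the") and one substitution ("lights" -> "light")
# against a 5-word reference gives 2/5 = 0.4.
print(wer("turn on the kitchen lights", "turn on kitchen light"))
```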
A Framework for Productive, Efficient and Portable Parallel Computing
Developing efficient parallel implementations and fully utilizing the available resources of parallel platforms is now required for software applications to scale to new generations of processors. Yet parallel programming remains challenging to programmers due to the requisite low-level knowledge of the underlying hardware and parallel computing constructs. Developing applications that effectively utilize parallel hardware is restricted by poor programmer productivity, low-level implementation requirements, and limited portability of the application code. These restrictions in turn impede experimentation with various algorithmic approaches for a given application. Currently, the programming world is divided into two types of programmers: application writers, who focus on designing and prototyping applications and algorithms, and efficiency programmers, who focus on extracting performance for particular compute kernels. The gap between these two types of programmers is referred to as "the implementation gap".

In this dissertation, we present a software environment that aims to bridge the implementation gap and enable application writers to productively utilize parallel hardware by reusing the work of efficiency programmers. Specifically, we present PyCASP, a Python-based software framework that automatically maps Python application code to a variety of parallel platforms. PyCASP is an application-domain-specific framework that uses a systematic, pattern-oriented approach to offer a single productive software development environment for application writers. PyCASP targets audio content analysis applications, but our methodology is designed to be applicable to any application domain. Using PyCASP, applications can be prototyped in Python code, and our environment enables them to automatically scale their performance to modern parallel processors such as GPUs, multicore CPUs, and compute clusters. We use the Selective Embedded JIT Specialization (SEJITS) mechanism to realize the pattern-based design of PyCASP in software. We use SEJITS to implement PyCASP's components and to enable automatic parallelization of specific audio content analysis application patterns on a variety of parallel hardware. By focusing on one application domain, we enable efficient composition of computations using three structural patterns: MapReduce, Iterator, and Pipe-and-Filter.

To illustrate our approach, we study a set of four example audio content analysis applications that are architected and implemented using PyCASP: a speaker verification system, a speaker diarization system, a music recommendation system, and a video event detection system. We describe the detailed implementation of two computational components of PyCASP, a Gaussian Mixture Model (GMM) component and a Support Vector Machine (SVM) component, and their use in implementing the example applications. We also analyze the composition of computations using the three structural patterns and implement the available optimizations for composing computations in audio analysis applications.

We evaluate our approach with results on productivity and performance using the two computational components and the four example applications. Our results illustrate that we can prototype fully functioning applications in Python using 10-60× fewer lines of code than equivalent implementations in low-level languages. Our PyCASP components and example applications achieve, and often exceed, the efficiency of comparable hand-tuned low-level implementations. In addition to specialization, adding the optimizations for composing components in these applications can give up to a 30% performance improvement. We show that applications written using PyCASP can be run on multiple parallel hardware backends with little or no application code change. PyCASP also enables applications to scale from one desktop GPU to a cluster of GPUs with little programmer effort. Combining all of the specialization and composition techniques, our example applications automatically achieve 50-1000× faster-than-real-time performance on both multicore CPU and GPU platforms and a 15.5× speedup on a 16-node cluster of GPUs, showing near-optimal scaling.
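The central idea for application writers is composing a few specialized components (GMM, SVM) through structural patterns such as Pipe-and-Filter. PyCASP's actual API is not given in this abstract, so the sketch below uses scikit-learn as a stand-in to show the general shape of such a composition in plain Python; every name here is illustrative, not PyCASP's interface, and the three "filter" stages are a simplified caricature of a speaker verification pipeline.

```python
# Illustrative only: scikit-learn stands in for PyCASP's specialized
# GMM/SVM components; in PyCASP, analogous calls would be JIT-specialized
# (via SEJITS) to GPU, multicore, or cluster backends.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy "audio features": two speakers, 20-dimensional frame vectors.
spk_a = rng.normal(0.0, 1.0, size=(500, 20))
spk_b = rng.normal(1.5, 1.0, size=(500, 20))

# Stage 1 (filter): model the overall feature space with a GMM.
ubm = GaussianMixture(n_components=8, random_state=0).fit(
    np.vstack([spk_a, spk_b]))

def utterance_vector(frames):
    # Stage 2 (filter): average per-component posteriors to get a
    # fixed-length representation of an utterance.
    return ubm.predict_proba(frames).mean(axis=0)

# Stage 3 (filter): classify utterance representations with an SVM.
X = np.array([utterance_vector(spk_a[i:i + 50]) for i in range(0, 500, 50)] +
             [utterance_vector(spk_b[i:i + 50]) for i in range(0, 500, 50)])
y = np.array([0] * 10 + [1] * 10)
clf = SVC().fit(X, y)

# A held-out utterance drawn from speaker B's distribution; expected: [1].
print(clf.predict([utterance_vector(rng.normal(1.5, 1.0, size=(50, 20)))]))
```

The point of the pattern is that each stage's output feeds the next, which is what lets a framework like PyCASP fuse stages or keep intermediate data resident on the accelerator instead of round-tripping through the host.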
Fast Speaker Diarization Using a High-Level Scripting Language
Abstract—Current state-of-the-art speaker diarization systems use agglomerative clustering of Gaussian Mixture Models (GMMs) to determine the number of speakers in an audio recording. GMM training is a central computation in the agglomerative clustering approach, and it presents computational challenges that limit performance and make real-time processing of audio very difficult. With the emergence of highly parallel multicore and manycore processors such as Graphics Processing Units (GPUs), we can re-implement GMM training for these processors to achieve faster-than-real-time performance by taking advantage of parallelism in the training computation. However, developing and maintaining low-level GPU code is difficult and requires a deep understanding of the hardware architecture of the parallel processor. In this paper, we present a speaker diarization application captured in under 50 lines of Python that achieves 50-200× faster-than-real-time performance by automatically executing computationally intensive GMM training on an NVIDIA GPU, with no significant loss in accuracy.
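The agglomerative step this paper accelerates repeatedly retrains GMMs and merges the pair of clusters whose merge least hurts the model, typically judged by a BIC-style criterion. Below is a compact, CPU-only Python sketch of that loop using scikit-learn; it illustrates the algorithmic skeleton only, not the paper's GPU implementation, and the merge criterion and parameters are simplified assumptions.

```python
# Simplified sketch of GMM-based agglomerative clustering for speaker
# diarization; not the paper's GPU code. Lower BIC indicates a better model.
import numpy as np
from sklearn.mixture import GaussianMixture

def bic(frames, n_components=4):
    # Each cluster must contain at least n_components frames to fit a GMM.
    gmm = GaussianMixture(n_components, covariance_type="diag",
                          random_state=0).fit(frames)
    return gmm.bic(frames)

def estimate_num_speakers(segments):
    """segments: list of (n_frames, n_features) arrays, one per initial
    cluster. Greedily merge clusters while merging lowers the total BIC."""
    clusters = list(segments)
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                merged = np.vstack([clusters[i], clusters[j]])
                # Negative delta: one merged model explains the data better
                # than two separate models, so the pair is a merge candidate.
                delta = bic(merged) - (bic(clusters[i]) + bic(clusters[j]))
                if best is None or delta < best[0]:
                    best = (delta, i, j)
        delta, i, j = best
        if delta >= 0:           # no merge improves the model: stop
            break
        clusters[i] = np.vstack([clusters[i], clusters[j]])
        del clusters[j]          # j > i, so index i remains valid
    return len(clusters)         # estimated number of speakers
```

Every iteration of this loop retrains a GMM per candidate pair, which is exactly why GMM training dominates the runtime and why offloading it to a GPU pays off.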