Algorithms for Accelerating Machine Learning Using Wide and Deep Models
Kyoto University doctoral thesis (new-system course doctorate), Kō No. 23310, Jōhaku No. 746, call number 新制||情||127 (Main Library). Kyoto University Graduate School of Informatics, Department of Intelligence Science and Technology. Examiners: Prof. Hisashi Kashima (chief examiner), Prof. Toshiyuki Tanaka, Prof. Nobuo Yamashita. Qualified under Article 4, Paragraph 1 of the Degree Regulations. Doctor of Informatics, Kyoto University. DFA
Secure Shapley Value for Cross-Silo Federated Learning
The Shapley value (SV) is a fair and principled metric for contribution
evaluation in cross-silo federated learning (cross-silo FL), wherein
organizations, i.e., clients, collaboratively train prediction models with the
coordination of a parameter server. However, existing SV calculation methods
for FL assume that the server can access the raw FL models and public test
data. This may not be a valid assumption in practice considering the emerging
privacy attacks on FL models and the fact that test data might be clients'
private assets. Hence, we investigate the problem of secure SV calculation for
cross-silo FL. We first propose HESV, a one-server solution based solely on
homomorphic encryption (HE) for privacy protection, which has limitations in
efficiency. To overcome these limitations, we propose SecSV, an efficient
two-server protocol with the following novel features. First, SecSV utilizes a
hybrid privacy protection scheme to avoid ciphertext--ciphertext
multiplications between test data and models, which are extremely expensive
under HE. Second, an efficient secure matrix multiplication method is proposed
for SecSV. Third, SecSV strategically identifies and skips some test samples
without significantly affecting the evaluation accuracy. Our experiments
demonstrate that SecSV is 7.2-36.6 times as fast as HESV, with a limited loss
in the accuracy of the calculated SVs.
Comment: Extended report for our VLDB 2023 paper
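The Shapley value the abstract builds on has a closed-form definition over all client coalitions. The sketch below shows the plain-text computation that the secure protocols evaluate under encryption; the `utility` function is hypothetical (e.g., test accuracy of a coalition's model), and no privacy protection is applied here.

```python
from itertools import combinations
from math import factorial

def shapley_values(clients, utility):
    """Exact Shapley values by enumerating all coalitions.

    `utility` maps a frozenset of clients to a real number, e.g. the test
    accuracy of a model trained by that coalition (hypothetical here).
    """
    n = len(clients)
    values = {}
    for i in clients:
        others = [c for c in clients if c != i]
        total = 0.0
        for r in range(n):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                # Weight of coalition s in the Shapley formula: |s|!(n-|s|-1)!/n!
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (utility(s | {i}) - utility(s))
        values[i] = total
    return values

# Toy 3-client example: utility = coalition size, so clients contribute equally.
print(shapley_values(["A", "B", "C"], lambda s: len(s)))
```

For three equally useful clients each value is exactly 1. Exact enumeration is O(2^n) in the number of clients, which is feasible in the cross-silo setting, where only a handful of organizations participate.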
End-to-End Neural Network-based Speech Recognition for Mobile and Embedded Devices
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, August 2020. Wonyong Sung.
Real-time automatic speech recognition (ASR) on mobile and embedded devices has been of great interest in recent years. Deep neural network-based automatic speech recognition demands a large number of computations, while mobile devices have limited memory bandwidth and battery capacity. A server-based implementation is often employed instead, but this increases latency and raises privacy concerns. Therefore, the need for on-device ASR systems is growing. Recurrent neural networks (RNNs) are often used for the ASR model. RNN implementations on embedded devices can suffer from excessive DRAM accesses, because the parameter size of a neural network usually exceeds the capacity of the cache memory. Moreover, RNN parameters cannot be reused across multiple time steps because of the feedback structure. To solve this problem, multi-time-step parallelizable models are applied to speech recognition. The multi-time-step parallelization approach computes multiple output samples at a time with the parameters fetched from DRAM. Since the number of DRAM accesses can be reduced in proportion to the number of parallelization steps, a high processing speed can be achieved for a parallelizable model.
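The DRAM-traffic argument above can be made concrete with a minimal SRU-style layer in plain NumPy (illustrative shapes, not the thesis implementation): the weight matrices are touched once for all T time steps, and only a cheap elementwise recurrence remains sequential.

```python
import numpy as np

def sru_forward(x, W, Wf, Wr, bf, br):
    """Minimal SRU-style layer illustrating multi-time-step parallelization.

    The three matrix multiplications below process the whole T-step input at
    once, so the weights are fetched from DRAM once rather than once per time
    step; only the cheap elementwise recurrence on the cell state c stays
    sequential.
    """
    T, d = x.shape
    # Parallel part: one big matmul per gate over all T time steps.
    xt = x @ W.T                            # candidate activations, (T, d)
    f = 1 / (1 + np.exp(-(x @ Wf.T + bf)))  # forget gate, (T, d)
    r = 1 / (1 + np.exp(-(x @ Wr.T + br)))  # reset gate, (T, d)
    # Sequential part: elementwise recurrence, no weight access at all.
    c = np.zeros(d)
    h = np.empty_like(x)
    for t in range(T):
        c = f[t] * c + (1 - f[t]) * xt[t]
        h[t] = r[t] * np.tanh(c) + (1 - r[t]) * x[t]
    return h

rng = np.random.default_rng(0)
d, T = 8, 16
h = sru_forward(rng.standard_normal((T, d)),
                *(rng.standard_normal((d, d)) for _ in range(3)),
                np.zeros(d), np.zeros(d))
print(h.shape)  # (16, 8)
```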
In this thesis, a connectionist temporal classification (CTC) model is constructed by combining simple recurrent units (SRUs) and depth-wise 1-dimensional convolution layers for multi-time-step parallelization. Both character and word-piece models are developed for the CTC model, and the corresponding RNN-based language models are used for beam-search decoding. A competitive WER on the WSJ corpus is achieved with a total model size of approximately 15 MB. The system runs in real time on a single ARM core, without a GPU or special hardware.
A low-latency on-device speech recognition system with a simple gated convolutional network (SGCN) is also proposed. The SGCN shows competitive recognition accuracy even with only 1M parameters. 8-bit quantization is applied to reduce the memory size and computation time. The proposed system features online recognition with a 0.4 s latency limit and operates at a real-time factor (RTF) of 0.2 on a single 900 MHz CPU core.
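As a rough illustration of the memory saving from 8-bit weights, here is a generic symmetric per-tensor quantizer; this is a common scheme, and the thesis's exact quantization recipe may differ.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor 8-bit quantization (a generic sketch, not the
    exact scheme used in the thesis). Maps floats to int8 with one scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).standard_normal((256, 256)).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()   # bounded by half a quantization step
print(q.nbytes, w.nbytes)  # 65536 262144 (4x smaller)
```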
In addition, an attention-based model with a depthwise convolutional encoder is proposed. Convolutional encoders enable faster training and inference of attention models than recurrent neural network-based ones. However, convolutional models often require a very large receptive field to achieve high recognition accuracy, which increases not only the parameter size but also the computational cost and run-time memory footprint. A convolutional encoder with a short receptive field often suffers from looping or skipping problems. We believe that this is due to the time-invariance of convolutions. We attempt to remedy this issue by adding positional information to the convolution-based encoder. It is shown that the word error rate (WER) of a convolutional encoder with a short receptive field can be reduced significantly by augmenting it with positional information. Visualization results are presented to demonstrate the effectiveness of incorporating positional information. A streaming end-to-end ASR model is also developed by applying monotonic chunkwise attention.
1 Introduction
1.1 End-to-End Automatic Speech Recognition with Neural Networks
1.2 Challenges on On-device Implementation of Neural Network-based ASR
1.3 Parallelizable Neural Network Architecture
1.4 Scope of Dissertation
2 Simple Recurrent Units for CTC-based End-to-End Speech Recognition
2.1 Introduction
2.2 Related Works
2.3 Speech Recognition Algorithm
2.3.1 Acoustic modeling
2.3.2 Character-based model
2.3.3 Word piece-based model
2.3.4 Decoding
2.4 Experimental Results
2.4.1 Acoustic models
2.4.2 Word piece based speech recognition
2.4.3 Execution time analysis
2.5 Concluding Remarks
3 Low-Latency Lightweight Streaming Speech Recognition with 8-bit Quantized Depthwise Gated Convolutional Neural Networks
3.1 Introduction
3.2 Simple Gated Convolutional Networks
3.2.1 Model structure
3.2.2 Multi-time-step parallelization
3.3 Training CTC AM with SGCN
3.3.1 Regularization with symmetrical weight noise injection
3.3.2 8-bit quantization
3.4 Experimental Results
3.4.1 Experimental setting
3.4.2 Results on WSJ eval92
3.4.3 Implementation on the embedded system
3.5 Concluding Remarks
4 Effect of Adding Positional Information on Convolutional Neural Networks for End-to-End Speech Recognition
4.1 Introduction
4.2 Related Works
4.3 Model Description
4.4 Experimental Results
4.4.1 Effect of receptive field size
4.4.2 Visualization
4.4.3 Comparison with other models
4.5 Concluding Remarks
5 Convolution-based Attention Model with Positional Encoding for Streaming Speech Recognition
5.1 Introduction
5.2 Related Works
5.3 End-to-End Model for Speech Recognition
5.3.1 Model description
5.3.2 Monotonic chunkwise attention
5.3.3 Positional encoding
5.4 Experimental Results
5.4.1 Effect of positional encoding
5.4.2 Comparison with other models
5.4.3 Execution time analysis
5.5 Concluding Remarks
6 Conclusion
Abstract (In Korean)
Effective attention-based sequence-to-sequence modelling for automatic speech recognition
With sufficient training data, attentional encoder-decoder models have given outstanding ASR results. In such models, the encoder encodes the input sequence into a sequence of hidden representations. The attention mechanism generates a soft alignment
between the encoder hidden states and the decoder hidden states. The decoder produces the current output by considering the alignment and the previous outputs.
However, attentional encoder-decoder models are originally designed for machine
translation tasks, where the input and output sequences are relatively short and the
alignments between them are flexible. For ASR tasks, the input sequences are notably
long. Further, acoustic frames (or their hidden representations) typically can be aligned
with output units in a left-to-right order, and compared to the length of the entire utterance, the duration of each output unit is usually small. Conventional encoder-decoder
models have difficulty modelling long sequences, and the attention mechanism
does not guarantee monotonic left-to-right alignments.
In this thesis, we study attention-based sequence-to-sequence ASR models and
address the aforementioned issues. We investigate recurrent neural network (RNN)
encoder-decoder models and self-attention encoder-decoder models. For RNN encoder-decoder models, we develop a dynamic subsampling RNN (dsRNN) encoder to shorten
the lengths of the input sequences. The dsRNN learns to skip redundant frames. Furthermore, the skip ratio may vary at different stages of training, allowing the encoder to learn the most relevant information at each epoch. The dsRNN thus alleviates the difficulty of encoding long sequences. We also propose a fully trainable
windowed attention mechanism, in which both the window shift and window length
are learned by the model. Our windowed method forces the attention mechanism to
attend inputs within small sliding windows in a strict left-to-right order. The proposed
dsRNN and windowed attention give significant performance gains over traditional
encoder-decoder ASR models.
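The windowed attention idea, restricting each decoder step to a small window that can only slide left-to-right, can be sketched as follows. In the thesis both the window shift and length are learned by the model; this simplified NumPy version fixes them as constants.

```python
import numpy as np

def windowed_attention(scores, prev_center, shift, length):
    """Attend only within a small window that slides left-to-right.

    `scores` are unnormalized attention energies over T encoder frames for
    one decoder step. In the thesis both `shift` and `length` are trainable;
    here they are fixed constants to keep the sketch simple.
    """
    T = scores.shape[0]
    center = prev_center + shift              # window moves strictly rightward
    lo = max(0, int(center - length // 2))
    hi = min(T, int(center + length // 2) + 1)
    mask = np.full(T, -np.inf)                # -inf outside the window
    mask[lo:hi] = 0.0
    w = np.exp(scores + mask - (scores + mask)[lo:hi].max())
    w /= w.sum()                              # softmax restricted to the window
    return w, center

rng = np.random.default_rng(0)
weights, c = windowed_attention(rng.standard_normal(50),
                                prev_center=10, shift=4, length=8)
print(np.count_nonzero(weights))  # 9: only frames inside the window get weight
```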
We next study self-attention encoder-decoder models. For RNN encoder-decoder
models, we have shown that restricting the attention within small windows is beneficial. However, self-attention encodes input sequences by comparing each element
of the sequence with all other elements of the sequence. Therefore, we investigate if
the global view of self-attention is necessary for ASR. We note that the range of the
learned context increases from the lower to the upper self-attention layers, and suggest
that the upper encoder layers may have seen sufficient contextual information without
the need for self-attention. This would imply that the upper self-attention layers can
be replaced with feed-forward layers (we can view the feed-forward layers as strict local left-to-right self-attention). In practice, we observe that replacing upper encoder self-attention layers with feed-forward layers does not impact the performance. We also
observe that there are individual attention heads that only attend local information, and
thus the self-attention mechanism is redundant for these attention heads. Based on
these observations, we propose randomly removing attention heads during training but keeping all heads at test time. The proposed method achieves state-of-the-art ASR results
on benchmark datasets of different ASR scenarios.
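The head-removal scheme (drop whole heads at random during training, keep all heads at test time) can be sketched as below. Averaging the surviving heads is a simplification of how real multi-head outputs are combined; this is an illustration of the idea in the abstract, not the published implementation.

```python
import numpy as np

def drop_heads(head_outputs, p_drop, rng, training=True):
    """Randomly zero out whole attention heads during training; keep all heads
    at test time. `head_outputs` has shape (num_heads, T, d)."""
    if not training or p_drop == 0.0:
        return head_outputs.mean(axis=0)      # test time: all heads contribute
    keep = (rng.random(head_outputs.shape[0]) > p_drop).astype(float)
    if keep.sum() == 0:                       # never drop every head
        keep[rng.integers(keep.size)] = 1.0
    # Average only surviving heads so the output scale matches test time.
    return (head_outputs * keep[:, None, None]).sum(axis=0) / keep.sum()

rng = np.random.default_rng(0)
out = drop_heads(rng.standard_normal((8, 10, 64)), 0.3, rng)
print(out.shape)  # (10, 64)
```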
Finally, we investigate top-down level-wise training of sequence-to-sequence ASR
models. We find that when training sequence-to-sequence ASR models on noisy data,
the use of upper layers trained on clean data forces the lower layers to learn noise-invariant features, since the features which fit the clean-trained upper layers are more
general. We further show that within the same dataset, conventional joint training
makes the upper layers quickly overfit. Therefore, we propose to freeze the upper
layers and retrain the lower layers. The proposed method is a general training strategy;
we use it not only to train ASR models but also to train other neural networks in other
domains. The proposed training method yields consistent performance gains across
different tasks (e.g., language modelling, image classification).
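The freeze-and-retrain step can be illustrated on a toy two-layer linear model: the upper layer stays fixed while gradient descent updates only the lower layer, which must therefore learn features that fit the frozen upper layer. This is a minimal sketch of the training strategy, not the thesis's ASR setup.

```python
import numpy as np

def retrain_lower_layer(x, y, W1, W2, lr=0.1, steps=300):
    """Top-down level-wise training, linear sketch: the upper layer W2
    (imagined as clean-trained) is frozen, and only the lower layer W1 is
    retrained with squared loss. W2 is deliberately never updated."""
    for _ in range(steps):
        h = x @ W1                        # lower layer (trainable)
        pred = h @ W2                     # upper layer (frozen)
        g = 2.0 * (pred - y) / len(x)     # d(loss)/d(pred)
        W1 -= lr * x.T @ (g @ W2.T)       # gradient step on W1 only
    return W1

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 4))
W2 = rng.standard_normal((4, 2))
W2 /= np.linalg.norm(W2)                  # keep the toy problem well-conditioned
y = x @ rng.standard_normal((4, 4)) @ W2  # targets realizable under frozen W2
W1 = retrain_lower_layer(x, y, np.zeros((4, 4)), W2)
loss = np.mean((x @ W1 @ W2 - y) ** 2)
print("final loss:", float(loss))
```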
In summary, we propose methods which enable attention-based sequence-to-sequence
ASR systems to better model sequential data, and demonstrate the benefits of training
neural networks in a top-down cascade manner.
Energy-Efficient Recurrent Neural Network Accelerators for Real-Time Inference
Over the past decade, Deep Learning (DL) and Deep Neural Networks (DNNs) have undergone rapid development. They are now widely applied to various applications and have profoundly changed human life. As an essential element of DNNs, Recurrent Neural Networks (RNNs) are helpful in processing time-sequential data and are widely used in applications such as speech recognition and machine translation. RNNs are difficult to compute because of their massive arithmetic operations and large memory footprint. RNN inference workloads used to be executed on conventional general-purpose processors, including Central Processing Units (CPUs) and Graphics Processing Units (GPUs); however, these contain hardware blocks unnecessary for RNN computation, such as branch predictors and caching systems, making them suboptimal for RNN processing. To accelerate RNN computations and outperform conventional processors, previous work focused on optimization methods in both software and hardware. On the software side, previous works mainly used model compression to reduce the memory footprint and the arithmetic operations of RNNs. On the hardware side, previous works designed domain-specific hardware accelerators based on Field-Programmable Gate Arrays (FPGAs) or Application-Specific Integrated Circuits (ASICs), with customized hardware pipelines optimized for efficient processing of RNNs. By following this software-hardware co-design strategy, previous works achieved at least 10X speedup over conventional processors. Many previous works focused on achieving high throughput with a large batch of input streams. However, in real-time applications, such as gaming Artificial Intelligence (AI) and dynamical system control, low latency is more critical. Moreover, there is a trend of offloading neural network workloads to edge devices to provide a better user experience and privacy protection.
Edge devices, such as mobile phones and wearable devices, are usually resource-constrained with a tight power budget. They require RNN hardware that is more energy-efficient to realize both low-latency inference and long battery life. Brain neurons exhibit sparsity in both the spatial and temporal domains. Inspired by this biological property, previous work mainly explored model compression to induce spatial sparsity in RNNs. The delta network algorithm, by contrast, induces temporal sparsity in RNNs and has been shown in previous works to save over 10X arithmetic operations.
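The delta network idea that these accelerators exploit can be sketched in a few lines: a matrix-vector product is updated incrementally, touching only the weight columns whose inputs changed by more than a threshold since their last use (plain NumPy, illustrative sizes, not the accelerator implementation).

```python
import numpy as np

def delta_matvec(W, x, x_prev, state, threshold=0.1):
    """Delta-network matrix-vector product: only columns of W whose input
    changed by more than `threshold` since the last update are touched,
    skipping most memory traffic and arithmetic when inputs vary slowly."""
    delta = x - x_prev
    active = np.abs(delta) > threshold        # temporally sparse inputs
    # Accumulate only the active columns; inactive ones contribute nothing.
    state = state + W[:, active] @ delta[active]
    x_prev = x_prev.copy()
    x_prev[active] = x[active]                # remember last *used* value
    return state, x_prev, active.mean()

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))
x0 = rng.standard_normal(16)
state = W @ x0                                # one full product at t = 0
x1 = x0 + rng.standard_normal(16) * 0.01      # slowly varying input
state, x_prev, frac = delta_matvec(W, x1, x0, state)
print(frac)  # fraction of inputs recomputed; close to 0 for slow inputs
```

The invariant maintained is `state == W @ x_prev`, so the result tracks the exact product up to the threshold-sized input error.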
In this work, we have proposed customized hardware accelerators that exploit temporal sparsity in Gated Recurrent Unit (GRU)-RNNs and Long Short-Term Memory (LSTM)-RNNs to achieve energy-efficient real-time RNN inference. First, we have proposed DeltaRNN, the first-ever RNN accelerator to exploit temporal sparsity in GRU-RNNs. DeltaRNN has achieved 1.2 TOp/s effective throughput with a batch size of 1, which is 15X higher than related works. Second, we have designed EdgeDRNN to accelerate GRU-RNN edge inference. Compared to DeltaRNN, EdgeDRNN does not rely on on-chip memory to store RNN weights and focuses on reducing off-chip Dynamic Random Access Memory (DRAM) data traffic using a more scalable architecture. EdgeDRNN has realized real-time inference of large GRU-RNNs with submillisecond latency and only 2.3 W wall-plug power consumption, achieving 4X higher energy efficiency than commercial edge AI platforms like the NVIDIA Jetson Nano. Third, we have used DeltaRNN to realize the first-ever continuous speech recognition system with the Dynamic Audio Sensor (DAS) as the front-end. The DAS is a neuromorphic event-driven sensor that produces a stream of asynchronous events instead of audio data sampled at a fixed sample rate. We have also showcased how an RNN accelerator can be integrated with an event-driven sensor on the same chip to realize ultra-low-power Keyword Spotting (KWS) on the extreme edge. Fourth, we have used EdgeDRNN to control a powered robotic prosthesis, using an RNN controller to replace a conventional proportional-derivative (PD) controller. EdgeDRNN has achieved 21 μs latency in running the RNN controller and could maintain stable control of the prosthesis. These applications demonstrate the value of DeltaRNN and EdgeDRNN in solving real-world problems.
Finally, we have applied the delta network algorithm to LSTM-RNNs and have combined it with a customized structured pruning method, called Column-Balanced Targeted Dropout (CBTD), to induce spatio-temporal sparsity in LSTM-RNNs. We have then proposed another FPGA-based accelerator called Spartus, the first RNN accelerator that exploits spatio-temporal sparsity. Spartus achieved 9.4 TOp/s effective throughput with a batch size of 1, the highest among existing FPGA-based RNN accelerators with a power budget of around 10 W. Spartus can complete the inference of an LSTM layer having 5 million parameters within 1 μs.
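A sketch in the spirit of column-balanced structured pruning: every column keeps the same number of nonzeros per row block, so the resulting sparsity pattern is balanced in a way hardware can exploit. The exact CBTD procedure in the work differs (it is a targeted dropout applied during training); this deterministic version only illustrates the balanced pattern.

```python
import numpy as np

def column_balanced_prune(W, block, keep):
    """Column-balanced structured pruning sketch: within every `block`-row
    slice of each column, only the `keep` largest-magnitude weights survive,
    so all columns end up with identical, balanced sparsity budgets."""
    Wp = np.zeros_like(W)
    rows, cols = W.shape
    for c in range(cols):
        for r0 in range(0, rows, block):
            seg = W[r0:r0 + block, c]
            top = np.argsort(np.abs(seg))[-keep:]   # row indices to keep
            Wp[r0 + top, c] = seg[top]
    return Wp

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
Wp = column_balanced_prune(W, block=4, keep=1)
print(np.count_nonzero(Wp, axis=0))  # [2 2 2 2]: balanced across columns
```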
- …