DOES BILATERAL TRADE LEAD TO INCOME CONVERGENCE? PANEL EVIDENCE
Through panel-data regressions, we find that both per capita income levels and growth rates converge as the trade intensity ratio between countries increases. Geographical proximity and language similarities are also associated with convergence in both income level and growth.
Keywords: Trade, Convergence, Distance, Language
Portfolio-Flow Volatility and Demand for International Reserves
This paper examines the importance of portfolio-flow volatility as a determinant of the demand for international reserves over the 1980-99 period. Using panel data, we find that portfolio-flow volatility significantly raises the level of reserve holdings; reserve accumulation is especially sensitive to the volatility of the portfolio balance (net flows). Capital account liberalization has increased uncertainty in the world economy, thereby making open economies more vulnerable to international financial crises. The regression results imply that monetary authorities have accumulated larger precautionary reserve balances against increased uncertainty in portfolio flows as capital account liberalization has progressed. As in previous studies, real openness is an important explanatory factor in determining the demand for reserves.
Short-Term Load Forecasting with Missing Data using Dilated Recurrent Attention Networks
Poster presented at the Northern Lights Deep Learning Workshop (NLDL) 2020, 19.01.20 - 21.01.20, UiT The Arctic University of Norway, Tromsø.
Forecasting the dynamics of time-varying systems is essential to maintaining the sustainability of the systems.
Recent studies have found that Recurrent Neural Networks (RNNs) applied to forecasting tasks outperform conventional models such as the AutoRegressive Integrated Moving Average (ARIMA).
However, due to the structural limitation of the vanilla RNN, which holds only unit-length internal connections, learning the representation of a time series with missing data can be severely biased.
The goal of this paper is to provide a robust RNN architecture against the bias from missing data.
We propose Dilated Recurrent Attention Networks (DRAN).
The proposed model has a stacked structure of multiple RNN layers, each having a different length of internal connections.
This structure allows incorporating previous information at different time scales.
DRAN updates its state by a weighted average of the layers.
To focus more on the layers that carry reliable information in the presence of missing data, it leverages an attention mechanism that learns the distribution of attention weights among the layers.
We report that our model outperforms conventional ones with respect to forecast accuracy on two benchmark datasets, including a real-world electricity load dataset.
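The state update described above can be sketched as follows. This is an illustrative simplification (not the authors' code): we assume each of the stacked dilated RNN layers produces a hidden state, and a learned attention vector weights those states into a single combined state; all names are ours.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D logit vector
    e = np.exp(x - x.max())
    return e / e.sum()

def dran_state_update(layer_states, attention_logits):
    """Combine per-layer hidden states of shape (L, H) into one
    state of shape (H,) by a softmax-weighted average over layers."""
    w = softmax(attention_logits)   # (L,) attention weights, sum to 1
    return w @ layer_states         # weighted average of layer states

rng = np.random.default_rng(0)
states = rng.normal(size=(3, 8))        # 3 dilated layers, hidden size 8
logits = np.array([0.2, 1.5, -0.3])     # hypothetical learned attention logits
s = dran_state_update(states, logits)   # combined state, shape (8,)
```

With equal logits the update reduces to a plain mean over layers; training the logits lets the model lean on the layers least affected by the missing values.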
Deep Semi-Supervised Semantic Segmentation in Multi-Frequency Echosounder Data
Multi-frequency echosounder data can provide a broad understanding of the underwater environment in a non-invasive manner. The analysis of echosounder data is, hence, a topic of great importance for the marine ecosystem. Semantic segmentation, a deep learning based analysis method predicting the class attribute of each acoustic intensity, has recently been in the spotlight of the fisheries and aquatic industry since its result can be used to estimate the abundance of marine organisms. However, a fundamental problem with current methods is the massive reliance on the availability of large amounts of annotated training data, which can only be acquired through expensive handcrafted annotation processes, making such approaches unrealistic in practice. As a solution to this challenge, we propose a novel approach, where we leverage a small amount of annotated data (supervised deep learning) and a large amount of readily available unannotated data (unsupervised learning), yielding a new data-efficient and accurate semi-supervised semantic segmentation method, all embodied in a single end-to-end trainable convolutional neural network architecture. Our method is evaluated on representative data from a sandeel survey in the North Sea conducted by the Norwegian Institute of Marine Research. Rigorous experiments validate that, by leveraging unannotated data, our method achieves comparable results while utilizing only 40 percent of the annotated data on which the supervised method is trained. The code is available at https://github.com/SFI-Visual-Intelligence/PredKlus-semisup-segmentation.
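The general shape of such a semi-supervised objective can be sketched as below. This is our own schematic, not the paper's method: we assume a supervised cross-entropy term on the few annotated pixels plus an unsupervised consistency term on the unannotated pixels (e.g. between two predictions for the same input); the function names and the weighting are hypothetical.

```python
import numpy as np

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true class, probs shape (N, C)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def consistency(probs_a, probs_b):
    # mean squared disagreement between two predictions on the same
    # unannotated pixels (e.g. from two augmented views)
    return np.mean((probs_a - probs_b) ** 2)

def semi_supervised_loss(p_lab, y_lab, p_un_a, p_un_b, lam=0.5):
    # supervised term on annotated pixels + weighted unsupervised term
    return cross_entropy(p_lab, y_lab) + lam * consistency(p_un_a, p_un_b)
```

The key property is that the second term is computed without labels, so the vast unannotated portion of the survey data still shapes the network.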
Robust Discriminative Metric Learning for Image Representation
Metric learning has attracted significant attention in the past decades, owing to appealing advances in various real-world applications such as person re-identification and face recognition. Traditional supervised metric learning attempts to seek a discriminative metric that minimizes the pairwise distance of within-class data samples while maximizing the pairwise distance of data samples from different classes. However, it remains a challenge to build a robust and discriminative metric, especially for corrupted data in real-world applications. In this paper, we propose a Robust Discriminative Metric Learning algorithm (RDML) via fast low-rank representation and a denoising strategy. To be specific, the metric learning problem is guided by a discriminative regularization that incorporates pair-wise or class-wise information. Moreover, low-rank basis learning is jointly optimized with the metric to better uncover the global data structure and remove noise. Furthermore, fast low-rank representation is implemented to mitigate the computational burden and ensure scalability on large-scale datasets. Finally, we evaluate our learned metric on several challenging tasks, e.g., face recognition/verification, object recognition, and image clustering. The experimental results verify the effectiveness of the proposed algorithm in comparison with many metric learning algorithms, even deep learning ones.
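The pull/push objective of traditional supervised metric learning described above can be sketched as follows. This is a minimal illustration of the general idea, not the RDML algorithm: we parameterize a Mahalanobis metric as M = AᵀA so distances stay non-negative, pull within-class pairs together, and push between-class pairs beyond a margin; the margin value and names are ours.

```python
import numpy as np

def mahalanobis(x, y, A):
    # squared Mahalanobis distance under M = A^T A (always >= 0)
    d = A @ (x - y)
    return float(d @ d)

def pairwise_loss(X, labels, A, margin=1.0):
    # pull same-class pairs together, push different-class pairs
    # apart until their distance exceeds the margin
    loss = 0.0
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            d = mahalanobis(X[i], X[j], A)
            if labels[i] == labels[j]:
                loss += d                      # within-class: minimize
            else:
                loss += max(0.0, margin - d)   # between-class: hinge push
    return loss
```

Minimizing this loss over A learns the metric; RDML additionally couples it with low-rank basis learning to denoise corrupted inputs.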
Semi-supervised target classification in multi-frequency echosounder data
Acoustic target classification in multi-frequency echosounder data is of major interest for marine ecosystem and fishery management since it can potentially estimate the abundance or biomass of the species. A key problem of current methods is the heavy dependence on the manual categorization of data samples. As a solution, we propose a novel semi-supervised deep learning method leveraging a few annotated data samples together with vast amounts of unannotated data samples, all in a single model. Specifically, two inter-connected objectives, namely, a clustering objective and a classification objective, optimize one shared convolutional neural network in an alternating manner. The clustering objective exploits the underlying structure of all data, both annotated and unannotated; the classification objective enforces a certain consistency to given classes using the few annotated data samples. We evaluate our classification method using echosounder data from the sandeel case study in the North Sea. In the semi-supervised setting with only a tenth of the training data annotated, our method achieves 67.6% accuracy, outperforming a conventional semi-supervised method by 7.0 percentage points. When applying the proposed method in a fully supervised setup, we achieve 74.7% accuracy, surpassing the standard supervised deep learning method by 4.7 percentage points.
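The alternating scheme described above can be sketched schematically. This is our own simplification, not the paper's code: a clustering step runs over embeddings of all samples (annotated and unannotated), while a classification-consistency check uses only the few annotated samples; a nearest-centroid clustering stands in for the paper's clustering objective, and all names are hypothetical.

```python
import numpy as np

def cluster_assign(Z, centroids):
    # assign every embedding (annotated or not) to its nearest centroid
    d = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

def update_centroids(Z, assign, k):
    # clustering step: recompute each centroid from its members
    return np.stack([Z[assign == c].mean(axis=0) for c in range(k)])

def label_consistency(assign, labels, annotated_mask):
    # classification step (check): agreement between cluster ids and
    # the known classes on the annotated subset only
    return float((assign[annotated_mask] == labels[annotated_mask]).mean())
```

In the actual method both objectives update one shared network in alternation; here the same division of labor is visible: clustering touches all data, supervision touches only the annotated mask.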
Learning to Quantize Deep Networks by Optimizing Quantization Intervals with Task Loss
Reducing the bit-widths of the activations and weights of deep networks makes them efficient to compute and store in memory, which is crucial for deployment to resource-limited devices such as mobile phones. However, decreasing bit-widths through quantization generally yields drastically degraded accuracy. To tackle this problem, we propose to learn to quantize activations and weights via a trainable quantizer that transforms and discretizes them. Specifically, we parameterize the quantization intervals and obtain their optimal values by directly minimizing the task loss of the network. This quantization-interval-learning (QIL) allows the quantized networks to maintain the accuracy of the full-precision (32-bit) networks with bit-widths as low as 4-bit, and minimizes the accuracy degradation with further bit-width reduction (i.e., 3-bit and 2-bit). Moreover, our quantizer can be trained on a heterogeneous dataset, and thus can be used to quantize pretrained networks without access to their training data. We demonstrate the effectiveness of our trainable quantizer on the ImageNet dataset with various network architectures such as ResNet-18, ResNet-34, and AlexNet, on which it outperforms existing methods and achieves state-of-the-art accuracy.
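Interval-parameterized quantization, as the abstract describes it, can be sketched in a toy form. This is our simplification, not the QIL formulation: values are clipped to an interval defined by a learnable center and width, normalized to [0, 1], and rounded onto a uniform grid of 2^b - 1 steps; the parameter names are ours.

```python
import numpy as np

def quantize(x, center, width, bits):
    """Uniform quantization restricted to the interval
    [center - width, center + width]; center and width would be
    the trainable parameters optimized against the task loss."""
    levels = 2 ** bits - 1
    lo, hi = center - width, center + width
    t = np.clip((x - lo) / (hi - lo), 0.0, 1.0)  # clip, map to [0, 1]
    return np.round(t * levels) / levels          # snap to uniform grid

x = np.linspace(-2.0, 2.0, 9)
q = quantize(x, center=0.0, width=1.0, bits=2)    # 2-bit: 4 output levels
```

Because `center` and `width` enter the computation differentiably (up to the rounding, which is typically handled with a straight-through estimator in practice), the interval itself can be tuned by backpropagating the task loss.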