Stochastic Downsampling for Cost-Adjustable Inference and Improved Regularization in Convolutional Networks
It is desirable to train convolutional networks (CNNs) to run more
efficiently during inference. In many cases, however, the computational budget
that the system has for inference cannot be known beforehand during training,
or the inference budget depends on changing real-time resource availability.
It is therefore inadequate to train only inference-efficient CNNs, whose
inference costs are fixed and cannot adapt to varying inference budgets. We
propose a novel approach for cost-adjustable inference in CNNs: Stochastic
Downsampling Point (SDPoint). During training, SDPoint applies feature map
downsampling at a random point in the layer hierarchy, with a random
downsampling ratio. The different stochastic downsampling configurations,
known as SDPoint instances (of the same model), have computational costs that
differ from each other, while being trained to minimize the same prediction
loss. Sharing network parameters across the different instances provides a
significant regularization boost. During inference, one can handpick an
SDPoint instance that best fits the inference budget. The effectiveness of
SDPoint, as both a cost-adjustable inference approach and a regularizer, is
validated through extensive experiments on image classification.
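The abstract includes no code; the following is a minimal PyTorch-style sketch of the idea as described above. The `SDPointNet` class, the per-block insertion granularity, and the candidate ratio set (0.5, 0.75) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of SDPoint-style training: at each training step,
# pick a random block boundary and a random downsampling ratio, and apply
# bilinear downsampling there. At inference, fixing (point, ratio) selects
# one "instance" with a particular cost/accuracy trade-off.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class SDPointNet(nn.Module):
    def __init__(self, blocks, num_classes=10, ratios=(0.5, 0.75)):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)   # e.g. residual stages
        self.ratios = ratios                  # candidate downsampling ratios
        self.head = nn.LazyLinear(num_classes)

    def forward(self, x, point=None, ratio=None):
        # Training: sample a random (point, ratio) per step.
        # Inference: pass both explicitly to pick an instance;
        # point == len(self.blocks) means "no downsampling" (full cost).
        if self.training and point is None:
            point = random.randrange(len(self.blocks) + 1)
            ratio = random.choice(self.ratios)
        for i, block in enumerate(self.blocks):
            if point is not None and i == point:
                x = F.interpolate(x, scale_factor=ratio, mode="bilinear",
                                  align_corners=False)
            x = block(x)
        x = x.mean(dim=(2, 3))                # global average pooling
        return self.head(x)
```

Because all instances share the same parameters and the same loss, the random choice acts as a regularizer during training while exposing a family of inference costs afterwards.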
Deep Pyramidal Residual Networks
Deep convolutional neural networks (DCNNs) have shown remarkable performance
in image classification tasks in recent years. Generally, deep neural network
architectures are stacks consisting of a large number of convolutional layers,
and they perform downsampling along the spatial dimension via pooling to reduce
memory usage. Concurrently, the feature map dimension (i.e., the number of
channels) is sharply increased at downsampling locations, which is essential to
ensure effective performance because it increases the diversity of high-level
attributes. This also applies to residual networks and is very closely related
to their performance. In this research, instead of sharply increasing the
feature map dimension at units that perform downsampling, we gradually increase
the feature map dimension at all units to involve as many locations as
possible. This design, which is discussed in depth together with our new
insights, has proven to be an effective means of improving generalization
ability. Furthermore, we propose a novel residual unit capable of further
improving the classification accuracy with our new network architecture.
Experiments on benchmark CIFAR-10, CIFAR-100, and ImageNet datasets have shown
that our network architecture has superior generalization ability compared to
the original residual networks. Code is available at
https://github.com/jhkim89/PyramidNet. Comment: Accepted to CVPR 2017.
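As a reading aid, here is a small sketch (not taken from the paper's repository) of the additive widening rule the abstract describes: rather than roughly doubling the channel count only at downsampling stages, every residual unit widens by a constant step alpha / N. The constants (base width 16, alpha = 48) reflect a common CIFAR setup and are assumptions here.

```python
# Hypothetical sketch of the pyramidal (additive) widening rule: channel
# counts grow by a constant increment at every residual unit instead of
# jumping (e.g. 16 -> 32 -> 64) only where downsampling happens.
import math

def pyramidal_widths(num_units, alpha=48, base=16):
    """Return per-unit channel counts under additive widening.

    `alpha` is the total widening spread over all `num_units` residual
    units; `base` is the stem width (both values are assumptions based
    on a typical CIFAR configuration)."""
    widths = []
    w = float(base)
    for _ in range(num_units):
        w += alpha / num_units
        widths.append(int(math.floor(w)))
    return widths

# Example: a 110-layer-style network has 54 residual units; channels grow
# smoothly from 16 toward 16 + alpha rather than doubling at two points.
print(pyramidal_widths(54, alpha=48)[:5])   # -> [16, 17, 18, 19, 20]
```

The gradual schedule spreads the capacity increase over many locations, which is the property the abstract credits for the improved generalization.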
tsdownsample: high-performance time series downsampling for scalable visualization
Interactive line chart visualizations greatly enhance the effective
exploration of large time series. Although downsampling has emerged as a
well-established approach to enable efficient interactive visualization of
large datasets, it is not an inherent feature in most visualization tools.
Furthermore, there is no library offering a convenient interface for
high-performance implementations of prominent downsampling algorithms. To
address these shortcomings, we present tsdownsample, an open-source Python
package specifically designed for CPU-based, in-memory time series
downsampling. Our library focuses on performance and convenient integration,
offering optimized implementations of leading downsampling algorithms. We
achieve this optimization by leveraging low-level SIMD instructions and
multithreading capabilities in Rust. In particular, SIMD instructions were
employed to optimize the argmin and argmax operations. This SIMD optimization,
along with some algorithmic tricks, proved crucial in enhancing the performance
of various downsampling algorithms. We evaluate the performance of tsdownsample
and demonstrate its interoperability with an established visualization
framework. Our performance benchmarks indicate that tsdownsample's throughput
approaches the CPU's memory bandwidth; that is, the algorithms are effectively
memory-bound. This work marks a
significant advancement in bringing high-performance time series downsampling
to the Python ecosystem, enabling scalable visualization. The open-source code
can be found at https://github.com/predict-idlab/tsdownsample. Comment: Submitted to SoftwareX.
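For context, a minimal usage sketch following the interface shown in the project's README: a downsampler's `downsample(...)` call returns the indices of the selected samples, which are then used to index the original arrays. The synthetic series and the `n_out` value are illustrative.

```python
# Downsample a large series to a plottable number of points with
# tsdownsample's MinMaxLTTB downsampler; the call returns indices
# into the input array rather than the values themselves.
import numpy as np
from tsdownsample import MinMaxLTTBDownsampler

n = 10_000_000
y = np.cumsum(np.random.randn(n)).astype(np.float32)  # large random walk

# Select 2_000 representative indices out of 10 million points.
idx = MinMaxLTTBDownsampler().downsample(y, n_out=2_000)
y_ds = y[idx]   # downsampled view, suitable for line-chart plotting
```

Returning indices rather than values keeps the library decoupled from any particular plotting framework, which is what enables the interoperability the abstract mentions.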