Sparse Linear Models applied to Power Quality Disturbance Classification
Power quality (PQ) analysis characterizes the distorted, non-ideal electric
signals that are commonly present in electric power systems. The automatic
recognition of PQ disturbances can be cast as a pattern recognition problem, in
which different types of waveform distortion are differentiated based on their
features.
Similar to other quasi-stationary signals, PQ disturbances can be decomposed
into time-frequency dependent components by using time-frequency or time-scale
transforms, also known as dictionaries. These dictionaries are used in the
feature extraction step in pattern recognition systems. Short-time Fourier,
Wavelets and Stockwell transforms are some of the most common dictionaries used
in the PQ community, aiming to achieve a better signal representation. To the
best of our knowledge, previous work on PQ disturbance classification has been
restricted to the use of a single dictionary among the several available. Taking
advantage of the theory behind sparse linear models (SLM), we introduce a
sparse method for PQ representation, starting from overcomplete dictionaries.
In particular, we apply Group Lasso. We employ different types of
time-frequency (or time-scale) dictionaries to characterize the PQ
disturbances, and evaluate their performance under different pattern
recognition algorithms. We show that SLMs reduce PQ classification complexity
by promoting sparse basis selection, while improving classification accuracy.
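As an illustration of the sparse selection step, the proximal operator of the group-lasso penalty shrinks each group of coefficients toward zero and drops entire groups whose norm is small. The sketch below is a minimal, generic NumPy implementation, not the paper's actual pipeline; the grouping of coefficients by dictionary (e.g. STFT atoms vs. wavelet atoms) is an assumption for illustration.

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of the group-lasso penalty.

    Each group of coefficients (e.g. all atoms from one dictionary)
    is shrunk toward zero; groups whose Euclidean norm falls below
    lam are set exactly to zero, yielding group-sparse solutions.
    """
    out = np.zeros_like(w)
    for idx in groups:
        g = w[idx]
        norm = np.linalg.norm(g)
        if norm > lam:
            # Shrink the whole group by a common factor.
            out[idx] = (1.0 - lam / norm) * g
    return out

# Toy example: group 0 could stand for STFT coefficients,
# group 1 for wavelet coefficients (hypothetical grouping).
w = np.array([3.0, 4.0, 0.1, 0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
w_sparse = group_soft_threshold(w, groups, lam=1.0)
```

The weak second group is zeroed out entirely, which is the mechanism by which group lasso selects a sparse subset of dictionaries rather than individual atoms.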
vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design
The most widely used machine learning frameworks require users to carefully
tune their memory usage so that the deep neural network (DNN) fits into the
DRAM capacity of a GPU. This restriction hampers researchers' flexibility to
study different machine learning algorithms, forcing them to either use a less
desirable network architecture or parallelize the processing across multiple
GPUs. We propose a runtime memory manager that virtualizes the memory usage of
DNNs such that both GPU and CPU memory can simultaneously be utilized for
training larger DNNs. Our virtualized DNN (vDNN) reduces the average GPU memory
usage of AlexNet by up to 89%, OverFeat by 91%, and GoogLeNet by 95%, a
significant reduction in memory requirements of DNNs. Similar experiments on
VGG-16, one of the deepest and most memory-hungry DNNs to date, demonstrate the
memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256
(requiring 28 GB of memory) to be trained on a single NVIDIA Titan X GPU card
containing 12 GB of memory, with 18% performance loss compared to a
hypothetical, oracular GPU with enough memory to hold the entire DNN.
Comment: Published as a conference paper at the 49th IEEE/ACM International
Symposium on Microarchitecture (MICRO-49), 2016
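The core idea, offloading feature maps from GPU to CPU memory and fetching them back when the backward pass needs them, can be sketched with a toy memory manager. This is an illustrative pure-Python simulation under an assumed FIFO offload policy, not the paper's CUDA implementation (vDNN additionally overlaps transfers with computation and uses layer-aware policies).

```python
import numpy as np

class ToyVirtualizedMemory:
    """Toy sketch of vDNN-style memory virtualization.

    Feature maps live in a bounded "GPU" pool; when capacity is
    exceeded, the oldest resident tensors are offloaded to a "CPU"
    pool, and offloaded tensors are fetched back on demand.
    """
    def __init__(self, gpu_capacity_bytes):
        self.capacity = gpu_capacity_bytes
        self.gpu = {}    # name -> array resident on the "GPU"
        self.cpu = {}    # name -> array offloaded to the "CPU"
        self.order = []  # FIFO of resident tensor names, oldest first

    def _used(self):
        return sum(a.nbytes for a in self.gpu.values())

    def store(self, name, array):
        # Offload the oldest feature maps until the new one fits.
        while self.order and self._used() + array.nbytes > self.capacity:
            victim = self.order.pop(0)
            self.cpu[victim] = self.gpu.pop(victim)
        self.gpu[name] = array
        self.order.append(name)

    def fetch(self, name):
        # Backward pass: bring an offloaded tensor back if needed.
        if name in self.cpu:
            self.store(name, self.cpu.pop(name))
        return self.gpu[name]
```

With a "GPU" capacity of two 8-byte tensors, storing a third tensor offloads the first, and fetching it later brings it back at the cost of evicting another, mirroring how vDNN trades transfer time for a larger effective memory.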
KACST Arabic Text Classification Project: Overview and Preliminary Results
Electronically formatted Arabic free-texts can be found in abundance these days on the World Wide Web, often linked to commercial enterprises and/or government organizations. Vast tracts of knowledge and relations lie hidden within these texts, knowledge that can be exploited once the correct intelligent tools have been identified and applied. For example, text mining may help with text classification and categorization. Text classification aims to automatically assign text to a predefined category based on identifiable linguistic features. Such a process has many useful applications including, but not restricted to, e-mail spam detection, web page content filtering, and automatic message routing. In this paper, an overview of the King Abdulaziz City for Science and Technology (KACST) Arabic Text Classification Project is presented, along with some preliminary results. This project will contribute to a better understanding and elaboration of Arabic text classification techniques.
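A common baseline for assigning texts to predefined categories is a bag-of-words multinomial Naive Bayes classifier. The sketch below is a generic, self-contained example with toy data, not the KACST project's actual system, features, or corpus.

```python
import math
from collections import Counter

class TinyNaiveBayes:
    """Minimal multinomial Naive Bayes text classifier
    (a generic sketch with Laplace smoothing)."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.class_counts = Counter(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for doc, c in zip(docs, labels):
            tokens = doc.split()
            self.word_counts[c].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, doc):
        best, best_score = None, -math.inf
        n_docs = sum(self.class_counts.values())
        for c in self.classes:
            # Log prior plus Laplace-smoothed log likelihoods.
            score = math.log(self.class_counts[c] / n_docs)
            total = sum(self.word_counts[c].values())
            for tok in doc.split():
                score += math.log(
                    (self.word_counts[c][tok] + 1) / (total + len(self.vocab))
                )
            if score > best_score:
                best, best_score = c, score
        return best

clf = TinyNaiveBayes().fit(
    ["win money now", "free money prize",
     "meeting schedule today", "project schedule update"],
    ["spam", "spam", "ham", "ham"],
)
```

A real Arabic pipeline would add language-specific preprocessing (normalization, stemming or light stemming, stop-word removal) before the same statistical step.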