Scalable Compression of Deep Neural Networks
Deep neural networks generally involve layers with millions of parameters,
making them difficult to deploy and update on devices with limited resources,
such as mobile phones and other smart embedded systems. In this paper, we
propose a scalable representation of the network parameters, so that different
applications can select the bit rate of the network best suited to their own
storage constraints. Moreover, when a device needs to upgrade to a higher-rate
network, the existing low-rate network can be reused, and only incremental
data need to be downloaded. We first hierarchically quantize the weights of a
pre-trained deep neural network to enforce weight sharing. Next, we adaptively
select the bits assigned to each layer given the total bit budget. Finally, we
retrain the network to fine-tune the quantized centroids. Experimental results
show that our method achieves scalable compression with graceful degradation
in performance.
Comment: 5 pages, 4 figures, ACM Multimedia 201
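The weight-sharing step described above (hierarchical quantization of a layer's weights into shared centroids) can be sketched roughly as follows; the `quantize_weights` helper, its linspace initialization, and the fixed number of Lloyd iterations are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def quantize_weights(weights, bits, iters=20):
    """One quantization level: cluster a layer's weights into 2**bits shared
    centroids via a simple Lloyd (k-means) loop. A sketch of weight sharing,
    not the paper's exact hierarchical scheme."""
    flat = weights.ravel()
    k = 2 ** bits
    # initialize centroids uniformly over the weight range
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # assign every weight to its nearest centroid, then re-center
        assign = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            members = flat[assign == j]
            if members.size:
                centroids[j] = members.mean()
    return centroids[assign].reshape(weights.shape), assign, centroids

# usage: quantize a random "layer" at 2 bits, then again at 4 bits
w = np.random.default_rng(1).normal(size=(8, 8))
w2, _, _ = quantize_weights(w, bits=2)
w4, _, _ = quantize_weights(w, bits=4)
assert np.abs(w - w4).mean() <= np.abs(w - w2).mean()  # higher rate, lower error
```

Re-quantizing at a higher bit budget yields more centroids and, in this sketch, a lower mean reconstruction error, mirroring the low-rate/high-rate trade-off the abstract describes.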
Automated construction of a hierarchy of self-organized neural network classifiers
This paper documents an effort to design and implement a neural network-based automatic classification system that dynamically constructs and trains a decision tree. The system combines neural network and decision tree technology: the decision tree is constructed to partition a large classification problem into smaller problems, and neural network modules then solve these smaller problems. We used a variant of the Fuzzy ARTMAP neural network, which can be trained much more quickly than traditional neural networks. The research extends the concept of self-organization from within the neural network to the overall structure of the dynamically constructed decision hierarchy. The primary advantage is the avoidance of manual tedium and subjective bias in constructing decision hierarchies. Additionally, removing the need for manual construction of the hierarchy opens up a large class of potential classification applications. When tested on data from real-world images, the automatically generated hierarchies performed slightly better than an intuitive (hand-built) hierarchy. Because the neural networks at the nodes of the decision hierarchy solve smaller problems, generalization performance can be improved if the number of features used to solve these problems is reduced. Algorithms for automatically selecting which features to use for each individual classification module were also implemented. We were able to achieve the same level of performance as in previous manual efforts, but in an efficient, automatic manner. The technology developed has great potential in a number of commercial areas, including data mining, pattern recognition, and intelligent interfaces for personal computer applications. Sample applications include fraud detection, bankruptcy prediction, a data mining agent, a scalable object recognition system, an email agent, a resource librarian agent, and a decision aid agent.
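As a rough illustration of the divide-and-conquer structure (not the Fuzzy ARTMAP system itself), a two-level hierarchy can route a sample to a class group at the root and let a per-group module finish the decision; `build_two_level_hierarchy` and the nearest-centroid "modules" below are stand-in assumptions for the trained neural network modules:

```python
import numpy as np

def build_two_level_hierarchy(X, y, split):
    """Toy sketch of a classification hierarchy: a root node routes a sample
    to one of two class groups (the decision-tree step), and a per-group
    nearest-centroid module (standing in for a neural network module) picks
    the final class. `split` maps each class label to a group id (0 or 1)."""
    groups = np.array([split[c] for c in y])
    group_centroids = np.stack([X[groups == g].mean(axis=0) for g in (0, 1)])
    class_centroids = {c: X[y == c].mean(axis=0) for c in set(y)}

    def predict(x):
        g = np.linalg.norm(group_centroids - x, axis=1).argmin()  # root routing
        cands = [c for c in split if split[c] == g]               # group's classes
        return min(cands, key=lambda c: np.linalg.norm(class_centroids[c] - x))
    return predict

# usage: four classes in two well-separated groups
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.1, (10, 2))
               for loc in [(0, 0), (1, 1), (10, 10), (11, 11)]])
y = np.array([0] * 10 + [1] * 10 + [2] * 10 + [3] * 10)
predict = build_two_level_hierarchy(X, y, {0: 0, 1: 0, 2: 1, 3: 1})
```

Each module only has to separate the classes inside its group, which is the smaller-problem decomposition the abstract attributes to the automatically built hierarchy.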
Performance Analysis of Hierarchical Routing Protocols in Wireless Sensor Networks
This work focuses on analyzing the optimization strategies of routing
protocols with respect to the energy utilization of sensor nodes in Wireless
Sensor Networks (WSNs). Different routing mechanisms have been proposed to
address the energy optimization problem in sensor nodes, and clustering is one
of the most popular WSN routing mechanisms. In this paper, we first address
energy limitation constraints with respect to maximizing network lifetime
using a linear programming formulation. To check the efficiency of different
clustering schemes against the modeled constraints, we select four
cluster-based routing protocols: Low Energy Adaptive Clustering Hierarchy
(LEACH), Threshold Sensitive Energy Efficient sensor Network (TEEN), Stable
Election Protocol (SEP), and Distributed Energy Efficient Clustering (DEEC).
To validate our mathematical framework, we perform analytical simulations in
MATLAB, choosing the number of alive nodes, number of dead nodes, number of
packets, and number of cluster heads (CHs) as performance metrics.
Comment: NGWMN with 7th IEEE International Conference on Broadband and
Wireless Computing, Communication and Applications (BWCCA 2012), Victoria,
Canada, 201
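LEACH, the first of the four protocols above, rotates the cluster-head (CH) role probabilistically each round using its standard election threshold; a minimal sketch follows, where `elect_cluster_heads` and its parameters are hypothetical names for illustration:

```python
import random

def leach_threshold(p, r):
    """LEACH's standard rotation threshold T(n) = p / (1 - p * (r mod 1/p))
    for round r, applied to nodes that have not yet served as CH in the
    current epoch; p is the desired fraction of cluster heads."""
    return p / (1 - p * (r % int(1 / p)))

def elect_cluster_heads(node_ids, p, r, eligible, rng=None):
    """Each eligible node becomes a cluster head this round with probability
    T(n); nodes that served recently (not in `eligible`) sit out."""
    rng = rng or random.Random(42)
    t = leach_threshold(p, r)
    return [n for n in node_ids if n in eligible and rng.random() < t]

# usage: 100 nodes, desired CH fraction p = 0.1, first round of the epoch
nodes = list(range(100))
chs = elect_cluster_heads(nodes, p=0.1, r=0, eligible=set(nodes))
```

The threshold grows as the epoch progresses (reaching 1 in the last round), which is what guarantees every node serves as CH once per epoch and spreads the energy cost evenly, the behavior the alive/dead-node metrics above measure.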
Fast Approximate k-Means via Cluster Closures
k-means, a simple and effective clustering algorithm, is one of the most
widely used algorithms in the multimedia and computer vision communities.
Traditional k-means is an iterative algorithm: in each iteration, new cluster
centers are computed and each data point is re-assigned to its nearest center.
The cluster re-assignment step becomes prohibitively expensive when the
numbers of data points and cluster centers are large.
In this paper, we propose a novel approximate k-means algorithm that greatly
reduces the computational complexity of the assignment step. Our approach is
motivated by the observation that most active points, i.e., points that change
their cluster assignments in an iteration, are located on or near cluster
boundaries. The idea is to efficiently identify those active points by
pre-assembling the data into groups of neighboring points using multiple
random spatial partition trees, and to use the neighborhood information to
construct a closure for each cluster, such that only a small number of
candidate clusters need to be considered when assigning a data point to its
nearest cluster. Using complexity analysis, image data clustering, and
applications to image retrieval, we show that our approach outperforms
state-of-the-art approximate k-means algorithms in both clustering quality
and efficiency.
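The active-point observation can be illustrated with a small sketch: after a full first assignment, only points whose two nearest centers are nearly equidistant (likely boundary points) are re-examined in later iterations. This is a simplification under assumed parameters (`margin`, `init`), not the paper's random-partition-tree closure construction:

```python
import numpy as np

def active_point_kmeans(X, k, iters=10, margin=0.2, init=None, seed=0):
    """k-means variant that, after each center update, re-assigns only the
    'active' points near cluster boundaries; interior points keep their
    label. A sketch of the principle, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    idx = init if init is not None else rng.choice(len(X), k, replace=False)
    centers = X[idx].astype(float)
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)
    assign = d.argmin(axis=1)                 # one full assignment pass
    for _ in range(iters):
        for j in range(k):                    # recompute centers from labels
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        two = np.sort(d, axis=1)[:, :2]
        # active = nearest and second-nearest centers are close (boundary)
        active = (two[:, 1] - two[:, 0]) < margin * two[:, 1]
        assign[active] = d[active].argmin(axis=1)
    return assign, centers
```

Skipping the interior points is what removes most of the cost of the re-assignment step; the paper's closures additionally shrink the candidate-center set per point rather than using all k centers as this sketch does.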