On Optimizing Energy Efficiency in Multi-Radio Multi-Channel Wireless Networks
Multi-radio multi-channel (MR-MC) networks significantly enhance network
throughput by exploiting multiple radio interfaces and non-overlapping
channels. While throughput optimization is one of the main targets of resource
allocation in MR-MC networks, network energy efficiency has recently become an
increasingly important concern. Although turning on more radios and exploiting
more channels for communication is always beneficial to network capacity, it is
not necessarily desirable from an energy efficiency perspective. The
relationship between these two often conflicting objectives has not been well
studied in existing work. In this paper, we investigate the problem of
optimizing energy efficiency under full-capacity operation in MR-MC networks
and analyze the optimal choices of the numbers of radios and channels. We
provide a detailed problem formulation and solution
procedures. In particular, for homogeneous commodity networks, we derive a
theoretical upper bound of the optimal energy efficiency and analyze the
conditions under which such optimality can be achieved. Numerical results
demonstrate that the achieved optimal energy efficiency is close to the
theoretical upper bound.
Comment: 6 pages, 5 figures, Accepted to Globecom 201
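To make the capacity/energy-efficiency tension above concrete, here is a toy numerical model (an illustration only, not the paper's formulation): each active radio draws a fixed amount of power, aggregate capacity keeps growing slowly once radios outnumber orthogonal channels, and energy efficiency in bits per joule peaks at an intermediate radio count. All constants are assumed values.

```python
import numpy as np

# Toy model of the capacity / energy-efficiency trade-off at an MR-MC node.
# All constants are illustrative assumptions, not values from the paper.
P_IDLE = 0.5      # W, baseline node power (assumed)
P_RADIO = 1.0     # W, extra power per active radio (assumed)
C_LINK = 54.0     # Mbps, nominal per-radio link rate (assumed)

def capacity(num_radios, num_channels):
    """Aggregate throughput: grows with radios but saturates once
    radios outnumber orthogonal channels (assumed contention model)."""
    effective = min(num_radios, num_channels)
    return C_LINK * effective + 0.1 * C_LINK * max(num_radios - num_channels, 0)

def energy_efficiency(num_radios, num_channels):
    """Bits delivered per joule consumed (Mb/J)."""
    power = P_IDLE + P_RADIO * num_radios
    return capacity(num_radios, num_channels) / power

if __name__ == "__main__":
    channels = 3
    for r in range(1, 7):
        print(f"radios={r}  capacity={capacity(r, channels):6.1f} Mbps  "
              f"EE={energy_efficiency(r, channels):5.2f} Mb/J")
```

With three channels, capacity keeps creeping up past three radios while energy efficiency drops, mirroring the trade-off the abstract describes.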
Butterfly-Net: Optimal Function Representation Based on Convolutional Neural Networks
Deep networks, especially convolutional neural networks (CNNs), have been
successfully applied in various areas of machine learning as well as to
challenging problems in other scientific and engineering fields. This paper
introduces Butterfly-Net, a low-complexity CNN with structured and sparse
cross-channel connections, together with a Butterfly initialization strategy
for a family of networks. Theoretical analysis of the approximation power of
Butterfly-Net for the Fourier representation of input data shows that the
error decays exponentially as the depth increases. By combining Butterfly-Net
with a fully connected neural network, a large class of problems is proved to
be well approximated with network complexity depending on the effective
frequency bandwidth instead of the input dimension. Regular CNNs are covered as
a special
case in our analysis. Numerical experiments validate the analytical results on
the approximation of Fourier kernels and energy functionals of Poisson's
equation. Moreover, all experiments show that training from the Butterfly
initialization outperforms training from random initialization. Also, adding
the remaining cross-channel connections, although it significantly increases
the number of parameters, does little to improve post-training accuracy and is
more sensitive to the data distribution.
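For intuition on why butterfly-structured sparse connections suffice for Fourier-type maps, recall that the radix-2 Cooley-Tukey factorization expresses an N-point DFT as O(log N) sparse "butterfly" stages. The sketch below (plain NumPy, not the Butterfly-Net implementation) verifies that this recursive butterfly combination reproduces numpy.fft.fft.

```python
import numpy as np

def butterfly_fft(x):
    """Radix-2 Cooley-Tukey FFT: each recursion level is a sparse butterfly
    combination of two half-size transforms, the structure that Butterfly-Net
    hard-wires into its cross-channel weights. Requires len(x) to be a power of two."""
    n = len(x)
    if n == 1:
        return x.astype(complex)
    even = butterfly_fft(x[0::2])
    odd = butterfly_fft(x[1::2])
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

if __name__ == "__main__":
    x = np.random.randn(64)
    assert np.allclose(butterfly_fft(x), np.fft.fft(x))
    print("butterfly factorization matches numpy.fft.fft")
```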
Statistical analysis of motion contrast in optical coherence tomography angiography
Optical coherence tomography angiography (Angio-OCT), mainly based on the
temporal dynamics of OCT scattering signals, has found a range of potential
applications in clinical and scientific research. Based on the model of random
phasor sums, temporal statistics of the complex-valued OCT signals are
mathematically described. Statistical distributions of the amplitude
differential and complex differential Angio-OCT signals are derived. The
theories are validated through flow phantom and live animal experiments.
Using the model developed, the origin of the motion contrast in Angio-OCT is
mathematically explained, and the implications for improving motion
contrast are further discussed, including threshold determination and its
residual classification error, averaging methods, and scanning protocols. The
proposed mathematical model of Angio-OCT signals can aid in the optimal design
of the system and associated algorithms.
Comment: 11 pages, 11 figures
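The motion-contrast mechanism can be illustrated with a small simulation under the random-phasor-sum model (a toy setup, not the paper's data or derivation): a static voxel yields highly correlated complex signals across repeated acquisitions, a flowing voxel decorrelates, and the complex differential |S2 - S1| shows a larger static-to-flow contrast than the amplitude differential ||S2| - |S1||.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000  # simulated voxels per class (illustrative)

def phasor_sum(n_scatterers, n_voxels):
    """Random phasor sum: complex OCT signal as a sum of unit scatterer phasors."""
    phases = rng.uniform(0, 2 * np.pi, size=(n_voxels, n_scatterers))
    return np.exp(1j * phases).sum(axis=1)

# Static tissue: second acquisition is the same phasor sum plus small noise.
s1_static = phasor_sum(20, N)
s2_static = s1_static + 0.3 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Flowing blood: scatterers rearrange between acquisitions -> independent sums.
s1_flow = phasor_sum(20, N)
s2_flow = phasor_sum(20, N)

def contrasts(s1, s2):
    amp_diff = np.abs(np.abs(s2) - np.abs(s1))   # amplitude differential
    cplx_diff = np.abs(s2 - s1)                  # complex differential
    return amp_diff.mean(), cplx_diff.mean()

for name, (s1, s2) in {"static": (s1_static, s2_static),
                       "flow": (s1_flow, s2_flow)}.items():
    a, c = contrasts(s1, s2)
    print(f"{name:6s}  mean amplitude diff = {a:5.2f}   mean complex diff = {c:5.2f}")
```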
A Survey of Learning Causality with Data: Problems and Methods
This work considers the question of how convenient access to copious data
impacts our ability to learn causal effects and relations. In what ways is
learning causality in the era of big data different from -- or the same as --
the traditional one? To answer this question, this survey provides a
comprehensive and structured review of both traditional and frontier methods in
learning causality and relations along with the connections between causality
and machine learning. This work points out on a case-by-case basis how big data
facilitates, complicates, or motivates each approach.
Comment: 35 pages, accepted by ACM CSU
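As a reminder of the kind of "traditional" building block such a survey covers, the synthetic example below (my own illustration, not taken from the survey) shows a naive difference in group means overstating a treatment effect because of a confounder, while simple regression adjustment recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Synthetic data with a known causal effect of 2.0 (illustrative only).
confounder = rng.standard_normal(n)                         # Z
treatment = (confounder + rng.standard_normal(n) > 0) * 1   # T depends on Z
outcome = 2.0 * treatment + 3.0 * confounder + rng.standard_normal(n)

# Naive estimate: difference of group means (biased by the confounder).
naive = outcome[treatment == 1].mean() - outcome[treatment == 0].mean()

# Adjusted estimate: OLS of outcome on [1, T, Z]; the T coefficient is the effect.
X = np.column_stack([np.ones(n), treatment, confounder])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"true effect = 2.0, naive = {naive:.2f}, adjusted = {beta[1]:.2f}")
```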
First-principles study of electronic structure, optical and phonon properties of {\alpha}-ZrW2O8
ZrW2O8 exhibits isotropic negative thermal expansion over its entire
temperature range of stability, yet its physical properties and the underlying
mechanism have not been fully addressed. In this article, the electronic
structure, elastic, thermal, optical and phonon properties of {\alpha}-ZrW2O8
are systematically investigated from first principles. The agreement between
the generalized gradient approximation (GGA) calculations and experiments is
found to be quite satisfactory. The calculated results can be useful in
relevant material design, e.g., when ZrW2O8 is employed to adjust the thermal
expansion coefficient of ceramic matrix composites.
Comment: 12 pages, 5 figures, 1 table and 29 references
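The composite application mentioned at the end can be sketched with back-of-envelope arithmetic (my own illustration: a plain volume-weighted rule of mixtures that ignores elastic stiffness, ZrW2O8's expansion coefficient taken as roughly -9 x 10^-6 K^-1, and an assumed matrix coefficient of +8 x 10^-6 K^-1).

```python
# Back-of-envelope composite CTE estimate (illustrative only).
# Simple volume-weighted rule of mixtures, ignoring elastic moduli
# (a Turner-type model would weight each phase by its stiffness as well).
ALPHA_ZRW2O8 = -9.0e-6   # K^-1, approximate isotropic NTE coefficient of ZrW2O8
ALPHA_MATRIX = 8.0e-6    # K^-1, assumed ceramic matrix CTE (illustrative)

def composite_cte(filler_fraction):
    """Linear rule of mixtures for the composite expansion coefficient."""
    return filler_fraction * ALPHA_ZRW2O8 + (1 - filler_fraction) * ALPHA_MATRIX

# Volume fraction of ZrW2O8 needed for a nominally zero-expansion composite.
f_zero = ALPHA_MATRIX / (ALPHA_MATRIX - ALPHA_ZRW2O8)
print(f"zero-CTE filler fraction ~ {f_zero:.2f}")
print(f"check: composite CTE at that fraction = {composite_cte(f_zero):.2e} K^-1")
```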
Community Detection in Signed Networks: an Error-Correcting Code Approach
In this paper, we consider the community detection problem in signed
networks, where there are two types of edges: positive edges (friends) and
negative edges (enemies). One renowned theorem of signed networks, known as
Harary's theorem, states that structurally balanced signed networks are
clusterable. By viewing each cycle in a signed network as a parity-check
constraint, we show that the community detection problem in a signed network
with two communities is equivalent to the decoding problem for a parity-check
code. We also show how one can use two renowned decoding algorithms from
error-correcting codes for community detection in signed networks: the
bit-flipping algorithm and the belief propagation algorithm. In addition to these two
algorithms, we also propose a new community detection algorithm, called the
Hamming distance algorithm, that performs community detection by finding a
codeword that minimizes the Hamming distance. We compare the performance of
these three algorithms by conducting various experiments with known ground
truth. Our experimental results show that our Hamming distance algorithm
outperforms the other two.
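The decoding analogy can be sketched in a few lines (a simplified greedy bit-flipping heuristic with random restarts, not the authors' exact algorithms): each node carries a bit, every positive edge is a "same bit" check, every negative edge is a "different bit" check, and bits are flipped whenever doing so reduces the number of violated checks, i.e. the Hamming-distance-like count of unsatisfied edges.

```python
import random

def bit_flip_decode(n, edges, label):
    """Greedy bit-flipping: flip a node's bit whenever doing so reduces the
    number of violated edge checks incident to it."""
    def incident_violations(node):
        count = 0
        for u, v, sign in edges:
            if node in (u, v):
                same = label[u] == label[v]
                count += (sign == +1 and not same) or (sign == -1 and same)
        return count

    improved = True
    while improved:
        improved = False
        for node in range(n):
            before = incident_violations(node)
            label[node] ^= 1
            if incident_violations(node) < before:
                improved = True          # keep the flip
            else:
                label[node] ^= 1         # undo it
    return label

def total_violations(edges, label):
    return sum((sign == +1) != (label[u] == label[v]) for u, v, sign in edges)

def detect_communities(n, edges, restarts=10, seed=0):
    """Random restarts guard against the local optima greedy flipping can hit."""
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        label = bit_flip_decode(n, edges, [rng.randint(0, 1) for _ in range(n)])
        if best is None or total_violations(edges, label) < total_violations(edges, best):
            best = label
    return best

if __name__ == "__main__":
    # Two ground-truth communities {0,1,2} and {3,4,5}: positive edges inside,
    # negative edges across (a structurally balanced signed network).
    edges = [(0, 1, +1), (1, 2, +1), (0, 2, +1),
             (3, 4, +1), (4, 5, +1), (3, 5, +1),
             (0, 3, -1), (1, 4, -1), (2, 5, -1)]
    print(detect_communities(6, edges))  # expect the two triangles to get opposite labels
```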
A divisive spectral method for network community detection
Community detection is a fundamental problem in the domain of complex-network
analysis. It has received great attention, and many community detection methods
have been proposed in the last decade. In this paper, we propose a divisive
spectral method for identifying community structures from networks, which
utilizes a sparsification operation to pre-process the networks first, and then
uses a repeated bisection spectral algorithm to partition the networks into
communities. The sparsification operation makes the community boundaries
clearer and sharper, so that the repeated spectral bisection algorithm can
accurately extract high-quality community structures from the sparsified
networks. Experiments show that the combination of network sparsification and
spectral bisection is highly successful: the proposed method is more effective
at detecting community structures in networks than the alternatives.
Comment: 23 pages, 10 figures, and 2 tables
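The sparsify-then-bisect pipeline can be illustrated with a single Fiedler-vector bisection; the common-neighbour reweighting below is a generic stand-in for the paper's sparsification operation, used here only to show how suppressing boundary edges sharpens the spectral split.

```python
import numpy as np

def sparsify(A, eps=1e-3):
    """Reweight each edge by its endpoints' common-neighbour count (plus a
    small floor so the graph stays connected). A generic stand-in for the
    paper's sparsification: edges inside dense communities are reinforced,
    boundary edges are suppressed."""
    common = (A @ A) * (A > 0)
    return np.where(A > 0, common + eps, 0.0)

def spectral_bisect(A):
    """Bisect a (weighted) graph by the sign of the Fiedler vector, i.e. the
    Laplacian eigenvector with the second-smallest eigenvalue."""
    laplacian = np.diag(A.sum(axis=1)) - A
    _, eigvecs = np.linalg.eigh(laplacian)
    return (eigvecs[:, 1] > 0).astype(int)

if __name__ == "__main__":
    # Two 4-node cliques joined by a single bridge edge (3-4).
    A = np.zeros((8, 8))
    for block in ([0, 1, 2, 3], [4, 5, 6, 7]):
        for i in block:
            for j in block:
                if i != j:
                    A[i, j] = 1.0
    A[3, 4] = A[4, 3] = 1.0

    print(spectral_bisect(sparsify(A)))  # expect [0 0 0 0 1 1 1 1] up to relabeling
```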
VecQ: Minimal Loss DNN Model Compression With Vectorized Weight Quantization
Quantization has been proven to be an effective method for reducing the
computing and/or storage cost of DNNs. However, the trade-off between the
quantization bitwidth and final accuracy is complex and non-convex, which makes
it difficult to optimize directly. Minimizing the direct quantization loss
(DQL) of the coefficient data is an effective local optimization method, but
previous works often neglect the accurate control of the DQL, resulting in a
higher loss of the final DNN model accuracy. In this paper, we propose a novel
metric called Vector Loss. Based on this new metric, we develop a new
quantization solution called VecQ, which can guarantee minimal direct
quantization loss and better model accuracy. In addition, in order to speed up
the proposed quantization process during model training, we accelerate the
quantization process with a parameterized probability estimation method and
template-based derivation calculation. We evaluate our proposed algorithm on
MNIST, CIFAR, ImageNet, IMDB movie review and THUCNews text data sets with
numerical DNN models. The results demonstrate that our proposed quantization
solution is more accurate and effective than the state-of-the-art approaches
yet with more flexible bitwidth support. Moreover, evaluation of our
quantized models on Salient Object Detection (SOD) tasks shows that they
maintain comparable feature extraction quality with up to 16x weight size
reduction.
Comment: 14 pages, 9 figures, Journa
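For context, the direct quantization loss (DQL) mentioned above is simply the distance between a layer's weights and their quantized values. The sketch below shows a bare-bones version (uniform symmetric quantization with a scale chosen by grid search to minimize the L2 loss); VecQ's vector-loss metric and training integration are not reproduced here.

```python
import numpy as np

def quantize_layer(weights, bits=4, num_scales=200):
    """Uniform symmetric quantization of one weight tensor.

    Picks the scale minimizing the direct (L2) quantization loss by a simple
    grid search; VecQ replaces this per-element loss with a vector-loss
    metric, which is not reproduced here."""
    levels = 2 ** (bits - 1) - 1              # e.g. 7 levels each side for 4 bits
    w = weights.ravel()
    best_scale, best_loss = None, np.inf
    for scale in np.linspace(np.abs(w).max() / levels / 4,
                             np.abs(w).max() / levels, num_scales):
        q = np.clip(np.round(w / scale), -levels, levels) * scale
        loss = np.sum((w - q) ** 2)           # direct quantization loss (DQL)
        if loss < best_loss:
            best_scale, best_loss = scale, loss
    q = np.clip(np.round(weights / best_scale), -levels, levels)
    return q.astype(np.int8), best_scale, best_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(scale=0.1, size=(64, 64))
    q, scale, loss = quantize_layer(weights, bits=4)
    print(f"scale={scale:.4f}  DQL={loss:.4f}  "
          f"mean reconstruction error={np.abs(weights - q * scale).mean():.5f}")
```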
CO Core Candidates in the Gemini Molecular Cloud
We present observations of a 4 square degree area toward the Gemini cloud
obtained using the J = 1-0 transitions of ^{12}CO, ^{13}CO and C^{18}O. No
C^{18}O emission was detected. This region contains 36 core candidates of
^{13}CO. These core candidates have a characteristic diameter of 0.25 pc,
excitation temperatures of 7.9 K, line widths of 0.54 km s^{-1} and a mean
mass of 1.4 M_{\sun}. They are likely to be starless core candidates, or
transient structures, which probably disperse after 10 yr.
Comment: Accepted for publication in AJ, 23 pages, 15 figures
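A quick consistency check on the "transient structures" interpretation (my own arithmetic, assuming the quoted line width is a FWHM and a uniform-density sphere): the virial mass implied by a 0.25 pc diameter and a 0.54 km s^{-1} line width comes out to several solar masses, well above the quoted ~1.4 M_{\sun} mean mass, so the cores would indeed be gravitationally unbound.

```python
import math

# Back-of-envelope virial check for the quoted core properties
# (assumes the line width is a FWHM and a uniform-density sphere;
# these assumptions are mine, not the paper's).
G = 6.674e-11            # m^3 kg^-1 s^-2
PC = 3.086e16            # m
MSUN = 1.989e30          # kg

diameter_pc = 0.25
fwhm_kms = 0.54

radius = 0.5 * diameter_pc * PC
sigma = fwhm_kms * 1e3 / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> 1D sigma

# Virial mass of a uniform-density sphere: M_vir = 5 sigma^2 R / G
m_vir = 5.0 * sigma**2 * radius / G / MSUN
print(f"virial mass ~ {m_vir:.1f} Msun  vs  quoted mean core mass ~ 1.4 Msun")
```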
Recent Advances in Efficient Computation of Deep Convolutional Neural Networks
Deep neural networks have evolved remarkably over the past few years and they
are currently the fundamental tools of many intelligent systems. At the same
time, the computational complexity and resource consumption of these networks
also continue to increase. This poses a significant challenge to the
deployment of such networks, especially in real-time applications or on
resource-limited devices. Thus, network acceleration has become a hot topic
within the deep learning community. On the hardware side, a number of
FPGA/ASIC-based accelerators for deep neural networks have been proposed
in recent years. In this paper, we provide a comprehensive survey of recent
advances in network acceleration, compression and accelerator design from both
algorithm and hardware points of view. Specifically, we provide a thorough
analysis of each of the following topics: network pruning, low-rank
approximation, network quantization, teacher-student networks, compact network
design and hardware accelerators. Finally, we will introduce and discuss a few
possible future directions.
Comment: 14 pages, 3 figures
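As a concrete taste of one surveyed technique, magnitude-based network pruning zeroes out the smallest weights and keeps a binary mask for later fine-tuning; the NumPy sketch below is an illustration, not code from the survey.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Magnitude-based pruning: zero out the `sparsity` fraction of weights
    with the smallest absolute value and return the pruned tensor plus the
    binary mask (which would be kept fixed during fine-tuning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k]       # k-th smallest magnitude
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 256))
    pruned, mask = magnitude_prune(w, sparsity=0.9)
    print(f"kept {mask.mean():.1%} of weights ({int(mask.sum())} of {mask.size})")
```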
