191 research outputs found
Successive Interference Cancellation and Fractional Frequency Reuse For LTE Uplink Communications
Cellular networks are increasingly densified to cope with fast-growing wireless traffic, and interference mitigation plays a key role in dense cellular networks. Successive interference cancellation (SIC) and fractional frequency reuse (FFR) are two representative inter-cell interference (ICI) mitigation techniques. In this paper, we study the application of both SIC and FFR in LTE uplink networks and develop an analytical model to investigate their interactions and impact on network performance. The performance gains with FFR and SIC are related to key system functionalities and variables, such as SIC parameters, FFR bandwidth partition, uplink power control and sector antennas. The ICIs from individual cell sectors are approximated by log-normal random variables, which enables low-complexity computation of the aggregate ICI with FFR and SIC. Network performance, in terms of site throughput and outage probability, is then computed. The model is fast and exhibits small modelling deviation, as validated by system-level simulations. Numerical results show that both SIC and FFR can largely improve network performance, with SIC having an edge over FFR. In addition, most of the network performance gains with SIC can be obtained with a small number of SIC stages applied to a few sectors.
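As a concrete illustration of the log-normal aggregation step, the classical Fenton-Wilkinson moment-matching method approximates a sum of independent log-normal interferers by a single log-normal; this is a standard technique and only a sketch here, since the paper's exact procedure may differ:

```python
import math

def fenton_wilkinson(params):
    """Approximate the sum of independent log-normal interferers by a
    single log-normal via first/second moment matching (Fenton-Wilkinson).

    params: list of (mu, sigma) of the underlying normals (natural log).
    Returns (mu_agg, sigma_agg) of the matched log-normal.
    """
    # Mean and variance of the sum of the individual log-normals
    m1 = sum(math.exp(mu + 0.5 * s * s) for mu, s in params)
    var = sum((math.exp(s * s) - 1.0) * math.exp(2.0 * mu + s * s)
              for mu, s in params)
    # Match a single log-normal to (m1, var)
    sigma2 = math.log(1.0 + var / (m1 * m1))
    mu_agg = math.log(m1) - 0.5 * sigma2
    return mu_agg, math.sqrt(sigma2)
```

With a single interferer the input parameters are recovered exactly, and for several interferers the matched distribution preserves the aggregate mean, which keeps the computation of the total ICI to a few closed-form operations per sector.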
Tuning micropillar cavity birefringence by laser induced surface defects
We demonstrate a technique to tune the optical properties of micropillar
cavities by creating small defects on the sample surface near the cavity region
with an intense focused laser beam. Such defects modify strain in the
structure, changing the birefringence in a controllable way. We apply the
technique to make the fundamental cavity mode polarization-degenerate and to
fine tune the overall mode frequencies, as needed for applications in quantum
information science.
Comment: RevTeX, 7 pages, 4 figures (accepted for publication in Applied Physics Letters)
CNOT and Bell-state analysis in the weak-coupling cavity QED regime
We propose an interface between the spin of a photon and the spin of an
electron confined in a quantum dot embedded in a microcavity operating in the
weak coupling regime. This interface, based on spin selective photon reflection
from the cavity, can be used to construct a CNOT gate, a multi-photon entangler
and a photonic Bell-state analyzer. Finally, we analyze experimental
feasibility, concluding that the schemes can be implemented with current
technology.
Comment: 4 pages, 2 figures
SpliceMix: A Cross-scale and Semantic Blending Augmentation Strategy for Multi-label Image Classification
Recently, Mix-style data augmentation methods (e.g., Mixup and CutMix) have
shown promising performance in various visual tasks. However, these methods are
primarily designed for single-label images, ignoring the considerable
discrepancies between single- and multi-label images, i.e., a multi-label image
involves multiple co-occurring categories and variable object scales. On the other
hand, previous multi-label image classification (MLIC) methods tend to design
elaborate models, bringing expensive computation. In this paper, we introduce a
simple but effective augmentation strategy for multi-label image
classification, namely SpliceMix. The "splice" in our method is two-fold: 1)
Each mixed image is a splice of several downsampled images in the form of a
grid, where the semantics of images attending to mixing are blended without
object deficiencies for alleviating co-occurred bias; 2) We splice mixed images
and the original mini-batch to form a new SpliceMixed mini-batch, which allows
an image with different scales to contribute to training together. Furthermore,
such splice in our SpliceMixed mini-batch enables interactions between mixed
images and original regular images. We also offer a simple and non-parametric
extension based on consistency learning (SpliceMix-CL) to show the flexible
extensibility of our SpliceMix. Extensive experiments on various tasks
demonstrate that only using SpliceMix with a baseline model (e.g., ResNet)
achieves better performance than state-of-the-art methods. Moreover, the
generalizability of our SpliceMix is further validated by the improvements in
current MLIC methods when married with our SpliceMix. The code is available at
https://github.com/zuiran/SpliceMix.
Comment: 13 pages, 10 figures
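The grid-splice operation can be sketched in a few lines of numpy; the function name and the simple strided subsampling below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def splice_mix(images, labels, grid=2):
    """Splice grid*grid downsampled images into one mixed image and take
    the union of their multi-hot labels (illustrative sketch).

    images: (N, H, W) array, labels: (N, C) multi-hot array,
    with N == grid * grid.
    """
    n, h, w = images.shape
    assert n == grid * grid
    mixed = np.zeros((h, w), dtype=images.dtype)
    hh, ww = h // grid, w // grid
    for k in range(n):
        # Downsample by simple subsampling, then place into its grid cell,
        # so each source image survives without object deficiencies
        small = images[k][::grid, ::grid][:hh, :ww]
        r, c = divmod(k, grid)
        mixed[r * hh:(r + 1) * hh, c * ww:(c + 1) * ww] = small
    # Blended semantics: union of the participating labels
    mixed_label = (labels.sum(axis=0) > 0).astype(labels.dtype)
    return mixed, mixed_label
```

A SpliceMixed mini-batch would then simply concatenate the original images with such mixed images, so the same content contributes to training at two scales.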
Free-Form Composition Networks for Egocentric Action Recognition
Egocentric action recognition is gaining significant attention in the field
of human action recognition. In this paper, we address the data scarcity issue in
egocentric action recognition from a compositional generalization perspective.
To tackle this problem, we propose a free-form composition network (FFCN) that
can simultaneously learn disentangled verb, preposition, and noun
representations, and then use them to compose new samples in the feature space
for rare classes of action videos. First, we use a graph to capture the
spatial-temporal relations among different hand/object instances in each action
video. We thus decompose each action into a set of verb and preposition
spatial-temporal representations using the edge features in the graph. The
temporal decomposition extracts verb and preposition representations from
different video frames, while the spatial decomposition adaptively learns verb
and preposition representations from action-related instances in each frame.
With these spatial-temporal representations of verbs and prepositions, we can
compose new samples for those rare classes in a free-form manner, which is not
restricted to a rigid form of a verb and a noun. The proposed FFCN can directly
generate new training data samples for rare classes, thereby significantly
improving action recognition performance. We evaluated our method on three
popular egocentric action recognition datasets, Something-Something V2, H2O,
and EPIC-KITCHENS-100, and the experimental results demonstrate the
effectiveness of the proposed method for handling data scarcity problems,
including long-tailed and few-shot egocentric action recognition.
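At its simplest, the free-form composition idea amounts to assembling a synthetic feature for a rare class from disentangled part representations; the names and fixed dimensions below are hypothetical, since the actual FFCN learns these representations end-to-end:

```python
import numpy as np

def compose_feature(parts):
    """Compose a synthetic sample from disentangled part representations,
    e.g. verb and preposition features from a common action video and a
    noun feature from another. Free-form: any number of parts is allowed,
    not just a rigid verb-noun pair. (Illustrative sketch only.)
    """
    return np.concatenate(parts)

# Hypothetical 8-dim part features for one composed rare-class sample
rng = np.random.default_rng(0)
verb, prep, noun = rng.standard_normal((3, 8))
new_sample = compose_feature([verb, prep, noun])
```

Generated samples like `new_sample` would augment the feature-space training set for rare classes, which is where the long-tailed and few-shot gains come from.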
Cutting Down Electricity Cost in Internet Data Centers by Using Energy Storage
Electricity consumption comprises a significant fraction of the total operating cost in data centers, and system operators are required to reduce the electricity bill as much as possible. In this paper, we consider utilizing the available energy storage capability in data centers to reduce the electricity bill under real-time electricity markets. The Lyapunov optimization technique is applied to design an algorithm that achieves an explicit tradeoff between cost saving and energy storage capacity. As far as we know, our work is the first to explore the problem of electricity cost saving using energy storage in multiple data centers by considering both the time diversity and location diversity of electricity prices. Index Terms: cloud computing, electricity cost, data center, energy storage, Lyapunov optimization
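The value of storage under time-varying prices can be illustrated with a toy threshold policy, not the paper's Lyapunov drift-plus-penalty algorithm: buy and charge when the price is low, discharge when it is high:

```python
def electricity_cost(prices, demand, capacity=0.0, threshold=5.0):
    """Total electricity cost under a toy charging policy: when the price
    is at or below `threshold`, serve demand from the grid and fill the
    battery; otherwise serve as much demand as possible from the battery.
    (Illustrative only -- not the paper's Lyapunov algorithm.)
    """
    level, cost = 0.0, 0.0
    for p, d in zip(prices, demand):
        if p <= threshold:
            buy = d + (capacity - level)   # serve demand and top up
            level = capacity
        else:
            used = min(level, d)           # discharge first
            level -= used
            buy = d - used
        cost += p * buy
    return cost
```

For prices `[1, 10]` and unit demand each slot, a battery of capacity 1 cuts the bill from 11 to 2 by shifting the expensive purchase to the cheap slot, which is exactly the time-diversity effect the paper exploits; the real algorithm additionally trades this saving off against storage capacity.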
Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks
Graph Neural Networks (GNNs) tend to suffer from high computation costs due
to the exponentially increasing scale of graph data and the number of model
parameters, which restricts their utility in practical applications. To this
end, some recent works focus on sparsifying GNNs with the lottery ticket
hypothesis (LTH) to reduce inference costs while maintaining performance
levels. However, the LTH-based methods suffer from two major drawbacks: 1) they
require exhaustive and iterative training of dense models, resulting in an
extremely large training computation cost, and 2) they only trim graph
structures and model parameters but ignore the node feature dimension, where
significant redundancy exists. To overcome the above limitations, we propose a
comprehensive graph gradual pruning framework termed CGP. This is achieved by
designing a during-training graph pruning paradigm to dynamically prune GNNs
within one training process. Unlike LTH-based methods, the proposed CGP
approach requires no re-training, which significantly reduces the computation
costs. Furthermore, we design a co-sparsifying strategy to comprehensively trim
all three core elements of GNNs: graph structures, node features, and model
parameters. Meanwhile, aiming at refining the pruning operation, we introduce a
regrowth process into our CGP framework, in order to re-establish the pruned
but important connections. The proposed CGP is evaluated by using a node
classification task across 6 GNN architectures, including shallow models (GCN
and GAT), shallow-but-deep-propagation models (SGC and APPNP), and deep models
(GCNII and ResGCN), on a total of 14 real-world graph datasets, including
large-scale graph datasets from the challenging Open Graph Benchmark.
Experiments reveal that our proposed strategy greatly improves both training
and inference efficiency while matching or even exceeding the accuracy of
existing methods.
Comment: 29 pages, 27 figures, submitting to IEEE TNNL
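For intuition, a generic during-training pruning loop typically combines a gradual sparsity schedule with magnitude masking; the sketch below follows that standard scheme (polynomial ramp plus magnitude threshold) and deliberately omits CGP's co-sparsifying and regrowth steps:

```python
import numpy as np

def sparsity_at(step, total_steps, final_sparsity):
    """Polynomial schedule: sparsity ramps from 0 up to final_sparsity
    over the course of a single training run (no re-training)."""
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1.0 - (1.0 - frac) ** 3)

def magnitude_mask(weights, sparsity):
    """Return a 0/1 mask zeroing the smallest-magnitude fraction
    `sparsity` of the weights (applied at each pruning step)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return np.ones_like(weights)
    thresh = np.partition(flat, k - 1)[k - 1]
    return (np.abs(weights) > thresh).astype(weights.dtype)
```

During training one would recompute the mask at each step with the scheduled sparsity and multiply it into the weights; CGP extends this idea to graph structures and node features as well, and regrows pruned but important connections.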