CycleGANAS: Differentiable Neural Architecture Search for CycleGAN
We develop a Neural Architecture Search (NAS) framework for CycleGAN, which
carries out the unpaired image-to-image translation task. Extending previous
NAS techniques for Generative Adversarial Networks (GANs) to CycleGAN is not
straightforward due to the different task and the larger search space. We
design architectures that consist of a stack of simple ResNet-based cells and
develop a search method that effectively explores the large search space. We
show that our framework, called CycleGANAS, not only discovers
high-performance architectures that match or surpass the performance of the
original CycleGAN, but also successfully addresses data imbalance by searching
for an individual architecture for each translation direction. To the best of
our knowledge, this is the first NAS result for CycleGAN, and it sheds light
on NAS for more complex structures.
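The abstract does not give the search method's details, but the core idea behind differentiable NAS in general can be sketched as a continuous relaxation: each cell edge computes a softmax-weighted mixture of candidate operations, so the architecture weights can be optimized by gradient descent alongside the network weights. The toy operations and variable names below are illustrative assumptions, not the paper's actual search space.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Toy 1-D stand-ins for a cell edge's candidate operations.
ops = [
    lambda x: x,            # identity / skip connection
    lambda x: np.tanh(x),   # nonlinearity, stand-in for a conv block
    lambda x: 0.0 * x,      # "zero" op (effectively prunes the edge)
]

def mixed_op(x, alpha):
    """Continuous relaxation: weight each op's output by softmax(alpha)."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

alpha = np.zeros(len(ops))        # architecture parameters, tuned by the search
x = np.array([0.5, -1.0, 2.0])
y = mixed_op(x, alpha)

# After the search, a discrete architecture is read off by keeping the
# candidate operation with the largest softmax weight on each edge.
best = int(np.argmax(softmax(alpha)))
```

Because `mixed_op` is differentiable in `alpha`, the search can explore a large space of cell stacks with ordinary gradient-based training rather than discrete enumeration.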
Dynamic Downlink Interference Management in LEO Satellite Networks Without Direct Communications
We investigate effective interference management for Low Earth Orbit (LEO) satellite networks that provide downlink services to ground users and share the same frequency spectrum range. Since there are multiple groups of LEO satellites with different constellation orbits, the ground users experience time-varying interference due to the overlapping of the main and side lobes of the satellite beams, which becomes even more challenging when the interfering satellites cannot communicate directly. To address the problem, we consider two LEO satellite groups that provide communication service in the same ground area while competing for communication resources. We develop solutions that maximize throughput and keep the time-varying interference below a certain level, without explicit message exchanges between the satellite groups. By exploiting statistical learning and deep reinforcement learning techniques, we develop learning-based resource allocation schemes and evaluate their performance through extensive simulations. We show their effectiveness under different reward settings and different interference-management constraints, and demonstrate that a Deep Q-Network (DQN)-based scheme can achieve close-to-optimal performance.
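The abstract's reward structure, throughput maximization subject to an interference constraint, can be illustrated with a minimal sketch. For simplicity this uses tabular Q-learning as a stand-in for the paper's DQN, and all states, actions, and reward coefficients below are illustrative assumptions rather than the paper's model: a satellite group observes only a local interference level (no message exchange) and learns a transmit-power policy whose reward is throughput minus an interference penalty.

```python
import random

STATES = ["low_interf", "high_interf"]   # locally observed interference level
ACTIONS = [0.5, 1.0]                     # normalized transmit-power levels

def reward(state, power):
    throughput = power                                   # higher power, higher rate
    penalty = power if state == "high_interf" else 0.0   # cost of adding interference
    return throughput - 1.5 * penalty

def train(episodes=5000, eps=0.1, lr=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)                 # interference varies over time
        if rng.random() < eps:                 # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: q[(s, a)])
        # One-step (bandit-style) Q update toward the observed reward.
        q[(s, a)] += lr * (reward(s, a) - q[(s, a)])
    return q

q = train()
# Greedy policy: full power under low interference, backed-off power under high.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

A DQN replaces the table `q` with a neural network so the scheme scales to the much larger state spaces (beam geometries, channel conditions) that arise in multi-constellation scenarios; the implicit-coordination idea, reacting to locally observed interference rather than exchanged messages, is the same.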