Performance analysis of resource scheduling in LTE femtocell with hybrid access mode
Femtocell is a promising technology that aims to solve indoor coverage problems and enhance cell capacity. The overall network performance, in turn, depends on the access method used by the femtocells, which determines which users may connect to the femtocell network. The Third Generation Partnership Project (3GPP) specification for Long Term Evolution (LTE) femtocells defines three access mechanisms: open, closed, and hybrid. Hybrid access is generally preferred by the network because it uses resources effectively, but it requires an appropriate scheduling scheme. In this paper, scheduling in femtocells is investigated: among non-subscribers, preference is given to users with a high throughput priority metric, thereby increasing the overall throughput of the network.
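The preference described in the abstract, subscribers first, then non-subscribers ranked by a throughput priority metric, can be sketched as a priority sort. This is a hypothetical illustration: the metric (instantaneous rate over average rate, proportional-fair style) and all field names are assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of hybrid-access scheduling: subscribers are served
# first, and among non-subscribers, users with a higher throughput
# priority metric are preferred. The metric and field names are
# assumptions, not the paper's exact scheme.

def schedule(users, num_resource_blocks):
    """users: dicts with 'id', 'subscriber', 'inst_rate', 'avg_rate'."""
    def priority(u):
        # (True, ...) sorts above (False, ...), so subscribers come first;
        # non-subscribers are then ordered by their rate ratio.
        return (u["subscriber"], u["inst_rate"] / u["avg_rate"])

    ranked = sorted(users, key=priority, reverse=True)
    return [u["id"] for u in ranked[:num_resource_blocks]]
```

Ranking by the ratio of instantaneous to average rate favors users whose channel is currently good relative to their history, which tends to raise aggregate throughput.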
Parameterized complexity of machine scheduling: 15 open problems
Machine scheduling problems are a long-time key domain of algorithms and
complexity research. A novel approach to these problems is fixed-parameter
algorithms. To stimulate this thriving research direction, we
propose 15 open questions in this area whose resolution we expect to lead to
the discovery of new approaches and techniques both in scheduling and
parameterized complexity theory.
Comment: Version accepted to Computers & Operations Research
Preemptive Scheduling of Equal-Length Jobs to Maximize Weighted Throughput
We study the problem of computing a preemptive schedule of equal-length jobs
with given release times, deadlines and weights. Our goal is to maximize the
weighted throughput, which is the total weight of completed jobs. In Graham's
notation this problem is described as (1 | r_j;p_j=p;pmtn | sum w_j U_j). We
provide an O(n^4)-time algorithm for this problem, improving the previous bound
of O(n^{10}) by Baptiste.
Comment: gained one author and lost one degree in the complexity
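The objective the paper maximizes, sum w_j U_j, is simply the total weight of jobs that finish by their deadlines. The snippet below only evaluates that objective for a given schedule; it is not the paper's O(n^4) algorithm, and the data layout is an assumption for illustration.

```python
# Illustrative only: evaluates the weighted-throughput objective
# sum(w_j * U_j), i.e. the total weight of jobs completed by their
# deadlines. This is NOT the paper's O(n^4) scheduling algorithm.

def weighted_throughput(jobs, completion_times):
    """jobs: id -> (weight, deadline);
    completion_times: id -> finish time (absent if never completed)."""
    return sum(
        w
        for jid, (w, deadline) in jobs.items()
        if jid in completion_times and completion_times[jid] <= deadline
    )
```

A job contributes its weight only if it both completes and does so no later than its deadline; late or unfinished jobs contribute nothing.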
SHADHO: Massively Scalable Hardware-Aware Distributed Hyperparameter Optimization
Computer vision is experiencing an AI renaissance, in which machine learning
models are expediting important breakthroughs in academic research and
commercial applications. Effectively training these models, however, is not
trivial due in part to hyperparameters: user-configured values that control a
model's ability to learn from data. Existing hyperparameter optimization
methods are highly parallel but make no effort to balance the search across
heterogeneous hardware or to prioritize searching high-impact spaces. In this
paper, we introduce a framework for massively Scalable Hardware-Aware
Distributed Hyperparameter Optimization (SHADHO). Our framework calculates the
relative complexity of each search space and monitors performance on the
learning task over all trials. These metrics are then used as heuristics to
assign hyperparameters to distributed workers based on their hardware. We first
demonstrate that our framework achieves double the throughput of a standard
distributed hyperparameter optimization framework by optimizing SVM for MNIST
using 150 distributed workers. We then conduct model search with SHADHO over
the course of one week using 74 GPUs across two compute clusters to optimize
U-Net for a cell segmentation task, discovering 515 models that achieve a lower
validation loss than standard U-Net.
Comment: 10 pages, 6 figures
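The heuristic the abstract describes, steering higher-complexity search spaces toward more capable hardware, might be sketched as a rank-order matching. This is a hedged stand-in: the complexity and speed scores below are hypothetical, not SHADHO's actual metrics.

```python
# Hedged sketch of hardware-aware assignment: sort search spaces by a
# complexity score and workers by a capability score, then pair them in
# rank order. Scores here are hypothetical stand-ins for SHADHO's
# internal heuristics.

def assign_spaces(spaces, workers):
    """spaces: list of (name, complexity); workers: list of (name, speed).
    Returns {worker_name: space_name} pairing rank by rank."""
    spaces = sorted(spaces, key=lambda s: s[1], reverse=True)
    workers = sorted(workers, key=lambda w: w[1], reverse=True)
    return {w[0]: s[0] for w, s in zip(workers, spaces)}
```

Rank-order matching keeps expensive spaces off slow hardware, which is one way a framework could double throughput over a hardware-oblivious assignment.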