Hybridizing Non-dominated Sorting Algorithms: Divide-and-Conquer Meets Best Order Sort
Many production-grade algorithms benefit from combining an asymptotically
efficient algorithm, which solves large problem instances by splitting them
into smaller ones, with an asymptotically inefficient algorithm that has a
very small implementation constant and handles the small subproblems. A
well-known example is stable sorting, where mergesort is often combined with
insertion sort to achieve a constant but noticeable speed-up.
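As a minimal illustration of that classic pattern (not code from the paper), the sketch below shows a stable mergesort that falls back to insertion sort below a small cutoff; the cutoff value 32 is an arbitrary illustrative choice.

```python
def insertion_sort(a, lo, hi):
    """Stable in-place insertion sort of a[lo:hi]; fast on tiny slices."""
    for i in range(lo + 1, hi):
        x = a[i]
        j = i - 1
        while j >= lo and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x

def hybrid_mergesort(a, lo=0, hi=None, cutoff=32):
    """Stable mergesort that switches to insertion sort on small subarrays."""
    if hi is None:
        hi = len(a)
    if hi - lo <= cutoff:           # small subproblem: cheap quadratic method
        insertion_sort(a, lo, hi)
        return
    mid = (lo + hi) // 2
    hybrid_mergesort(a, lo, mid, cutoff)
    hybrid_mergesort(a, mid, hi, cutoff)
    # standard stable merge of the two sorted halves
    merged = []
    i, j = lo, mid
    while i < mid and j < hi:
        if a[j] < a[i]:
            merged.append(a[j]); j += 1
        else:
            merged.append(a[i]); i += 1
    merged.extend(a[i:mid]); merged.extend(a[j:hi])
    a[lo:hi] = merged
```

Calling hybrid_mergesort(data) sorts the list in place; tuning the cutoff trades recursion overhead against the quadratic cost of the fallback.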
We apply this idea to non-dominated sorting. Namely, we combine the
divide-and-conquer algorithm, which has the currently best known asymptotic
runtime of O(n (log n)^(k-1)) for n points and k objectives, with the Best
Order Sort algorithm, which has the runtime of O(k n^2) but demonstrates the
best practical performance out of quadratic algorithms.
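To make the quadratic side of such a hybrid concrete, here is a hedged sketch of a simple O(k n^2) non-dominated sorting routine (Deb-style fast non-dominated sort, shown instead of Best Order Sort for brevity); it is not the paper's algorithm, only the kind of small-subproblem routine a hybrid scheme would dispatch to below a size threshold.

```python
def dominates(p, q):
    """True if p Pareto-dominates q (minimization in every objective)."""
    return all(pi <= qi for pi, qi in zip(p, q)) and any(pi < qi for pi, qi in zip(p, q))

def fast_nondominated_sort(points):
    """Deb-style fast non-dominated sort, O(k * n^2).

    Returns the front index (0 = non-dominated) of every point.  A hybrid in
    the spirit of the abstract would run a quadratic routine like this (or
    Best Order Sort) only on small subproblems and use divide-and-conquer
    for large ones.
    """
    n = len(points)
    dominated_count = [0] * n             # how many points dominate point i
    dominated_sets = [[] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j and dominates(points[i], points[j]):
                dominated_sets[i].append(j)
                dominated_count[j] += 1
    rank = [0] * n
    current = [i for i in range(n) if dominated_count[i] == 0]
    front = 0
    while current:
        nxt = []
        for i in current:
            rank[i] = front
            for j in dominated_sets[i]:
                dominated_count[j] -= 1
                if dominated_count[j] == 0:
                    nxt.append(j)
        current = nxt
        front += 1
    return rank
```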
Empirical evaluation shows that the hybrid's running time is typically not
worse than that of either original algorithm, while for large numbers of
points it outperforms them by at least 20%. For smaller numbers of objectives,
the speedup can be as large as four times.
Comment: A two-page abstract of this paper will appear in the proceedings
companion of the 2017 Genetic and Evolutionary Computation Conference (GECCO
2017).
Workload Equity in Vehicle Routing Problems: A Survey and Analysis
Over the past two decades, equity aspects have been considered in a growing
number of models and methods for vehicle routing problems (VRPs). Equity
concerns most often relate to fairly allocating workloads and to balancing the
utilization of resources, and many practical applications have been reported in
the literature. However, there has been only limited discussion about how
workload equity should be modeled in VRPs, and various measures for optimizing
such objectives have been proposed and implemented without a critical
evaluation of their respective merits and consequences.
This article addresses this gap with an analysis of classical and alternative
equity functions for biobjective VRP models. In our survey, we review and
categorize the existing literature on equitable VRPs. In the analysis, we
identify a set of axiomatic properties that an ideal equity measure should
satisfy, collect six common measures, and point out important connections
between their properties and those of the resulting Pareto-optimal solutions.
To gauge the extent of these implications, we also conduct a numerical study on
small biobjective VRP instances solvable to optimality. Our study reveals two
undesirable consequences when optimizing equity with nonmonotonic functions:
Pareto-optimal solutions can consist of non-TSP-optimal tours, and even if all
tours are TSP optimal, Pareto-optimal solutions can be workload inconsistent,
i.e. composed of tours whose workloads are all equal to or longer than those of
other Pareto-optimal solutions. We show that the extent of these phenomena
should not be underestimated. The results of our biobjective analysis are
also valid for weighted-sum, constraint-based, or single-objective models.
Based on this analysis, we conclude that monotonic equity functions are more
appropriate for certain types of VRP models, and suggest promising avenues
for further research.
Comment: Accepted Manuscript
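To make the monotonicity distinction concrete, the following small sketch (not from the article) computes three commonly used workload-equity measures over a list of tour workloads; max workload is monotonic in every workload, whereas range and standard deviation are not, which is why the latter can prefer solutions whose tours are uniformly longer.

```python
from statistics import pstdev

def equity_measures(workloads):
    """Illustrative equity measures over tour workloads (e.g. tour lengths)."""
    return {
        "max": max(workloads),                     # monotonic measure
        "range": max(workloads) - min(workloads),  # nonmonotonic
        "stdev": pstdev(workloads),                # nonmonotonic
    }

# A nonmonotonic measure can prefer uniformly longer tours:
balanced_but_long = [10.0, 10.0, 10.0]
shorter_but_uneven = [6.0, 7.0, 9.0]
print(equity_measures(balanced_but_long))   # range 0.0, stdev 0.0, max 10.0
print(equity_measures(shorter_but_uneven))  # range 3.0, stdev ~1.25, max 9.0
```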
NoCo: ILP-based worst-case contention estimation for mesh real-time manycores
Manycores are capable of providing the computational demands required by functionally-advanced critical applications in domains such as automotive and avionics. In manycores, a network-on-chip (NoC) provides access to shared caches and memories and hence concentrates most of the contention that tasks suffer, with effects on the worst-case contention delay (WCD) of packets and tasks' WCET. While several proposals minimize the impact of individual NoC parameters on WCD, e.g. mapping and routing, there are strong dependences among these NoC parameters. Hence, finding the optimal NoC configuration requires optimizing all parameters simultaneously, which represents a multidimensional optimization problem. In this paper we propose NoCo, a novel approach that combines ILP and stochastic optimization to find NoC configurations in terms of packet routing, application mapping, and arbitration weight allocation. Our results show that NoCo improves on other techniques that optimize only a subset of the NoC parameters.
This work has been partially supported by the Spanish Ministry of Economy and Competitiveness under grant TIN2015-65316-P and the HiPEAC Network of Excellence. It also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (agreement No. 772773). Carles Hernández is jointly supported by the MINECO and FEDER funds through grant TIN2014-60404-JIN. Jaume Abella has been partially supported by the Spanish Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship number RYC-2013-14717. Enrico Mezzetti has been partially supported by the Spanish Ministry of Economy and Competitiveness under Juan de la Cierva-Incorporación postdoctoral fellowship number IJCI-2016-27396.
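The following toy sketch is not NoCo's actual formulation; it only illustrates the flavor of the stochastic part of such a search: a random search over task-to-tile mappings on a small mesh, scored by a crude contention proxy (the heaviest traffic on any XY-route link). The mesh size, flow set, and cost function are made-up placeholders, and the ILP component, routing choices, and arbitration weights are omitted.

```python
import random
from collections import Counter

MESH_W = 4                                                    # hypothetical 4x4 mesh
FLOWS = [(0, 1, 3.0), (0, 2, 1.0), (2, 3, 2.0), (1, 3, 1.0)]  # (src task, dst task, traffic)
N_TASKS = 4

def xy_route(src, dst):
    """Links visited by dimension-ordered XY routing between two tiles."""
    sx, sy = src % MESH_W, src // MESH_W
    dx, dy = dst % MESH_W, dst // MESH_W
    links, x, y = [], sx, sy
    while x != dx:
        nx = x + (1 if dx > x else -1)
        links.append(((x, y), (nx, y))); x = nx
    while y != dy:
        ny = y + (1 if dy > y else -1)
        links.append(((x, y), (x, ny))); y = ny
    return links

def contention_proxy(mapping):
    """Crude contention proxy: heaviest total traffic crossing any single link."""
    load = Counter()
    for s, d, t in FLOWS:
        for link in xy_route(mapping[s], mapping[d]):
            load[link] += t
    return max(load.values()) if load else 0.0

def random_search(iters=2000, seed=0):
    """Pure random search over mappings; NoCo itself combines ILP and stochastic steps."""
    rng = random.Random(seed)
    tiles = list(range(MESH_W * MESH_W))
    best, best_cost = None, float("inf")
    for _ in range(iters):
        mapping = rng.sample(tiles, N_TASKS)   # task i -> tile mapping[i]
        cost = contention_proxy(mapping)
        if cost < best_cost:
            best, best_cost = mapping, cost
    return best, best_cost
```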
Hardware Impairments Aware Transceiver Design for Bidirectional Full-Duplex MIMO OFDM Systems
In this paper we address the linear precoding and decoding design problem for
a bidirectional orthogonal frequency-division multiplexing (OFDM) communication
system between two multiple-input multiple-output (MIMO) full-duplex (FD)
nodes. The effects of hardware distortion as well as channel state information
(CSI) error are taken into account. In the first step, we transform the
available time-domain characterization of the hardware distortions for FD MIMO
transceivers to the frequency domain, via a linear Fourier transformation. As a
result, the explicit impact of hardware inaccuracies on the residual
self-interference (RSI) and inter-carrier leakage (ICL) is formulated in
relation to the intended transmit/received signals. Afterwards, linear
precoding and decoding designs are proposed to enhance the system performance
following the minimum-mean-squared-error (MMSE) and sum-rate maximization
strategies, assuming the availability of perfect or erroneous CSI. The proposed
designs are based on the application of alternating optimization over the
system parameters, leading to guaranteed convergence. Numerical results
indicate that the application of a distortion-aware design is essential for a
system with a high hardware distortion, or for a system with a low thermal
noise variance.
Comment: Submitted to IEEE for publication
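As a hedged illustration of the first step only (not the paper's derivation), the snippet below uses a discrete Fourier transform to turn a time-domain distortion impulse response into per-subcarrier frequency-domain coefficients, which is the kind of linear transformation the abstract refers to; the filter taps and OFDM size are made up.

```python
import numpy as np

N_SC = 64                                   # hypothetical number of OFDM subcarriers
rng = np.random.default_rng(0)

# Made-up example: a short time-domain impulse response modelling a residual
# distortion path (e.g. residual self-interference after cancellation).
h_time = np.array([0.8, 0.3 + 0.1j, 0.05j])

# Per-subcarrier frequency response: zero-pad to the FFT size and transform.
h_freq = np.fft.fft(h_time, n=N_SC)

# With cyclic-prefixed OFDM, time-domain convolution becomes a per-subcarrier
# multiplication, so subcarrier k sees the scalar gain h_freq[k].
x_freq = rng.standard_normal(N_SC) + 1j * rng.standard_normal(N_SC)  # transmitted symbols
y_freq = h_freq * x_freq                    # distortion contribution per subcarrier
```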
A bi-level model of dynamic traffic signal control with continuum approximation
This paper proposes a bi-level model for traffic network signal control, which is formulated as a dynamic Stackelberg game and solved as a mathematical program with equilibrium constraints (MPEC). The lower-level problem is a dynamic user equilibrium (DUE) with an embedded dynamic network loading (DNL) sub-problem based on the LWR model (Lighthill and Whitham, 1955; Richards, 1956). The upper-level decision variables are (time-varying) signal green splits with the objective of minimizing network-wide travel cost. Unlike most existing literature, which mainly uses an on-and-off (binary) representation of the signal controls, we employ a continuum signal model recently proposed and analyzed in Han et al. (2014), which aims at describing and predicting the aggregate behavior that exists at signalized intersections without relying on distinct signal phases. Advantages of this continuum signal model include fewer integer variables, less restrictive constraints on the time steps, and higher decision resolution. It simplifies the modeling representation of large-scale urban traffic networks with the benefit of improved computational efficiency in simulation or optimization. We present, for the LWR-based DNL model that explicitly captures vehicle spillback, an in-depth study on the implementation of the continuum signal model, as its approximation accuracy depends on a number of factors and may deteriorate greatly under certain conditions. The proposed MPEC is solved on two test networks with three metaheuristic methods. Parallel computing is employed to significantly accelerate the solution procedure.
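A deliberately tiny sketch of the bi-level structure follows; it is not the authors' MPEC formulation or their DNL model. The upper level searches over time-varying green splits with a simple random-search metaheuristic, while the lower-level DUE/DNL evaluation is replaced by a stand-in cost function, and every numeric choice is hypothetical.

```python
import random

N_INTERSECTIONS = 3
N_PERIODS = 4          # number of signal timing periods

def network_travel_cost(green_splits):
    """Stand-in for the lower-level DUE/DNL evaluation.

    In the actual bi-level model this would load the network with the
    LWR-based DNL under dynamic user equilibrium; here it is a smooth toy
    cost that penalizes splits far from a (made-up) demand-dependent ideal.
    """
    ideal = [[0.6, 0.5, 0.4, 0.5],
             [0.5, 0.6, 0.5, 0.4],
             [0.4, 0.5, 0.6, 0.5]]
    return sum((g - ideal[i][t]) ** 2
               for i, row in enumerate(green_splits)
               for t, g in enumerate(row))

def upper_level_search(iters=5000, seed=1):
    """Random search over continuous green splits in [0.2, 0.8] per period."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        splits = [[rng.uniform(0.2, 0.8) for _ in range(N_PERIODS)]
                  for _ in range(N_INTERSECTIONS)]
        cost = network_travel_cost(splits)
        if cost < best_cost:
            best, best_cost = splits, cost
    return best, best_cost
```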