
    Dynamic traffic assignment: model classifications and recent advances in travel choice principles

    Dynamic Traffic Assignment (DTA) has been studied for more than four decades and numerous reviews of this research area have been conducted. This review focuses on the travel choice principle and the classification of DTA models, and is supplementary to the existing reviews. The implications of the travel choice principle for the existence and uniqueness of DTA solutions are discussed, and the interrelation between the travel choice principle and the traffic flow component is explained using the nonlinear complementarity problem, the variational inequality problem, the mathematical programming problem, and the fixed point problem formulations. This paper also points out that all of the reviewed travel choice principles are extended from those used in static traffic assignment. There are also many classifications of DTA models, in which each classification addresses one aspect of DTA modeling. Finally, some future research directions are identified.
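
    As a concrete illustration of the fixed point formulation mentioned in this review, the sketch below iterates a logit travel choice model against congestion-dependent route costs on a hypothetical two-route network, using the method of successive averages. The demand, cost functions, and parameters are illustrative assumptions, not taken from the paper.

        import math

        DEMAND = 100.0  # total trips between one origin-destination pair (assumed)

        def route_costs(f1):
            # Congestion-dependent travel costs on two routes (toy BPR-style
            # functions; free-flow times and capacities are made up).
            f2 = DEMAND - f1
            c1 = 10.0 * (1.0 + 0.15 * (f1 / 60.0) ** 4)
            c2 = 12.0 * (1.0 + 0.15 * (f2 / 40.0) ** 4)
            return c1, c2

        def logit_choice(c1, c2, theta=0.5):
            # Travel choice principle: logit share of demand picking route 1.
            e1, e2 = math.exp(-theta * c1), math.exp(-theta * c2)
            return DEMAND * e1 / (e1 + e2)

        # Fixed point formulation: f* is an equilibrium iff
        # f* = logit_choice(route_costs(f*)). Iterate with the method of
        # successive averages (MSA), step size 1/k.
        f1 = DEMAND / 2.0
        for k in range(1, 201):
            f1 += (logit_choice(*route_costs(f1)) - f1) / k
        print(f"equilibrium flow on route 1: {f1:.2f} of {DEMAND:.0f} trips")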

    Physical Layer Service Integration in 5G: Potentials and Challenges

    High transmission rates and secure communication have been identified as key targets that need to be effectively addressed by fifth generation (5G) wireless systems. In this context, the concept of physical-layer security becomes attractive, as it can establish perfect security using only the characteristics of the wireless medium. Nonetheless, to further increase spectral efficiency, an emerging concept, termed physical-layer service integration (PHY-SI), has been recognized as an effective means. Its basic idea is to combine multiple coexisting services, i.e., multicast/broadcast service and confidential service, into one integral service for one-time transmission at the transmitter side. This article first provides a tutorial on typical PHY-SI models. Furthermore, we propose some state-of-the-art solutions to improve the overall performance of PHY-SI in certain important communication scenarios. In particular, we highlight the extension of several concepts borrowed from conventional single-service communications, such as artificial noise (AN) and eigenmode transmission, to the scenario of PHY-SI. These techniques are shown to be effective in the design of reliable and robust PHY-SI schemes. Finally, several potential research directions are identified for future work.
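
    To make the artificial noise concept concrete, here is a minimal numerical sketch of AN precoding in which the noise is confined to the null space of the legitimate receiver's channel, so that only an eavesdropper is degraded. The antenna count, channel model, and symbol below are illustrative assumptions, not the article's scheme.

        import numpy as np

        rng = np.random.default_rng(0)
        nt = 4  # transmit antennas (assumed)
        # Legitimate receiver's channel (assumed i.i.d. Rayleigh fading).
        h = rng.standard_normal(nt) + 1j * rng.standard_normal(nt)

        # Orthonormal basis for the null space of h: AN sent along these
        # directions is invisible to the legitimate receiver.
        _, _, vh = np.linalg.svd(h.reshape(1, -1))
        null_basis = vh[1:].conj().T  # nt x (nt - 1)

        s = (1 + 1j) / np.sqrt(2)  # one confidential symbol (assumed QPSK)
        w = h.conj() / np.linalg.norm(h)  # maximum-ratio beamformer
        z = rng.standard_normal(nt - 1) + 1j * rng.standard_normal(nt - 1)
        x = w * s + null_basis @ z  # transmit vector: signal plus AN

        print("AN leakage at legitimate receiver:", abs(h @ null_basis @ z))  # ~0
        print("desired signal term:", h @ w * s)  # scaled by the channel gain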

    Best matching processes in distributed systems

    The growing complexity and dynamic behavior of modern manufacturing and service industries, along with competitive and globalized markets, have gradually transformed traditional centralized systems into distributed networks of e-Systems (electronic systems). Emerging examples include e-Factories, virtual enterprises, smart farms, automated warehouses, and intelligent transportation systems. These (and similar) distributed systems, regardless of context and application, have a property in common: they all involve certain types of interactions (collaborative, competitive, or both) among their distributed individuals, from clusters of passive sensors and machines to complex networks of computers, intelligent robots, humans, and enterprises. Because of this common property, such systems may encounter common challenges in the form of suboptimal interactions, and thus poor performance, caused by potential mismatches between individuals. For example, mismatches between subassembly parts, between vehicles and routes, between suppliers and retailers, between employees and departments, and among products, automated guided vehicles, and storage locations may lead to low-quality products, congested roads, unstable supply networks, conflicts, and low service levels, respectively. This research refers to this problem as best matching, and investigates it as a major design principle of CCT, the Collaborative Control Theory. The original contribution of this research is to elaborate on the fundamentals of best matching in distributed and collaborative systems by providing general frameworks for (1) systematic analysis, inclusive taxonomy, and analogical and structural comparison of different matching processes; (2) specification and formulation of problems, and development of algorithms and protocols for best matching; and (3) validation of the models, algorithms, and protocols through extensive numerical experiments and case studies. The first goal is addressed by investigating matching problems in distributed production, manufacturing, supply, and service systems based on a recently developed reference model, the PRISM Taxonomy of Best Matching. Following the second goal, the identified problems are then formulated as mixed-integer programs. Due to the computational complexity of matching problems, various optimization algorithms are developed for solving different problem instances, including modified genetic algorithms, tabu search, and neighbourhood search heuristics. The dynamic and collaborative/competitive behaviors of matching processes in distributed settings are also formulated and examined through various collaboration, best matching, and task administration protocols. In line with the third goal, four case studies are conducted on various manufacturing, supply, and service systems to highlight the impact of best matching on their operational performance, including service level, utilization, stability, and cost-effectiveness, and to validate the computational merits of the developed solution methodologies.
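
    As a toy instance of such a matching formulation, the sketch below solves a one-to-one supplier-retailer matching as a min-cost assignment problem; the assignment polytope is integral, so the Hungarian method recovers the mixed-integer optimum exactly. The cost matrix is hypothetical and unrelated to the thesis's case studies.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # Hypothetical mismatch costs: cost[i, j] of pairing supplier i with
        # retailer j (lower means a better match).
        cost = np.array([
            [4.0, 1.0, 3.0],
            [2.0, 0.0, 5.0],
            [3.0, 2.0, 2.0],
        ])
        rows, cols = linear_sum_assignment(cost)  # exact min-cost matching
        for i, j in zip(rows, cols):
            print(f"supplier {i} -> retailer {j} (cost {cost[i, j]:.1f})")
        print(f"total matching cost: {cost[rows, cols].sum():.1f}")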

    Fair Robust Assignment Using Redundancy

    We study the consideration of fairness in redundant assignment for multi-agent task allocation. It has recently been shown that redundant assignment of agents to tasks provides robustness to uncertainty in task performance. However, the question of how to fairly assign these redundant resources across tasks remains unaddressed. In this paper, we present a novel problem formulation for fair redundant task allocation, in which we cast it as the optimization of worst-case task costs. Solving this problem optimally is NP-hard. Therefore, we exploit properties of supermodularity to propose a polynomial-time, near-optimal solution. Our algorithm provides a solution set that is α times larger than the optimal set size in order to guarantee a solution cost at least as good as the optimal target cost. We derive the suboptimality bound on this cardinality relaxation, α. Additionally, we demonstrate that our algorithm performs near-optimally without the cardinality relaxation. We evaluate the algorithm in simulations of redundant assignments of robots to goal nodes on transport networks with uncertain travel times. Empirically, our algorithm outperforms benchmarks, scales to large problems, and provides improvements in both fairness and average utility. We gratefully acknowledge the support from ARL Grant DCIST CRA W911NF-17-2-0181, NSF Grant CNS-1521617, ARO Grant W911NF-13-1-0350, ONR Grants N00014-20-1-2822 and N00014-20-S-B001, and Qualcomm Research. The first author acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1845298.
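
    For intuition, here is a generic greedy sketch of redundant assignment under a worst-case objective: each step adds the agent-task pair that most reduces the maximum task cost. The cost model, instance data, and budget are assumptions for illustration; this is not the paper's supermodularity-based algorithm, and it carries none of the paper's guarantees.

        import itertools

        travel = {  # hypothetical worst-case travel times (agent, task) -> time
            ("a1", "t1"): 5.0, ("a1", "t2"): 9.0,
            ("a2", "t1"): 7.0, ("a2", "t2"): 4.0,
            ("a3", "t1"): 6.0, ("a3", "t2"): 8.0,
        }
        agents, tasks = {"a1", "a2", "a3"}, {"t1", "t2"}
        assigned = {t: set() for t in tasks}

        def objective(assignment):
            # A task's cost is the best (minimum) travel time among its
            # assigned agents, so redundant assignments hedge uncertainty.
            costs = [
                min((travel[a, t] for a in assignment[t]), default=1e9)
                for t in tasks
            ]
            return max(costs), sum(costs)  # worst case first; ties by total

        free = set(agents)
        for _ in range(3):  # assumed budget of three assignments
            a, t = min(
                itertools.product(free, tasks),
                key=lambda p: objective({**assigned, p[1]: assigned[p[1]] | {p[0]}}),
            )
            assigned[t].add(a)
            free.remove(a)
        print(assigned, "worst-case cost:", objective(assigned)[0])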

    Oracle-Based Robust Optimization via Online Learning

    Robust optimization is a common framework in optimization under uncertainty when the problem parameters are not known exactly, but are only known to belong to some given uncertainty set. In the robust optimization framework, the problem solved is a min-max problem, where a solution is judged by its performance on the worst possible realization of the parameters. In many cases, a straightforward solution of a robust optimization problem of a certain type requires solving an optimization problem of a more complicated type, which in some cases is even NP-hard. For example, the robust counterpart of a conic quadratic program under ellipsoidal uncertainty, such as those arising in robust SVM, leads in general to a semidefinite program. In this paper, we develop a method for approximately solving a robust optimization problem using tools from online convex optimization, where in every stage a standard (non-robust) optimization program is solved. Our algorithms find an approximate robust solution using a number of calls to an oracle that solves the original (non-robust) problem that is inversely proportional to the square of the target accuracy.
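
    The sketch below mimics that oracle/online-learning loop on an assumed toy instance, minimizing over the probability simplex the worst case of a linear objective u^T x with u in a box: an online learner updates u by projected gradient ascent while the non-robust oracle best-responds with x, and the averaged responses approximate the robust solution. The instance, step sizes, and iteration count are illustrative assumptions.

        import numpy as np

        n = 3
        U_lo = np.array([0.0, 1.0, 0.5])  # assumed box uncertainty set for u
        U_hi = np.array([2.0, 3.0, 1.5])

        def oracle(u):
            # Non-robust oracle: minimize u @ x over the probability simplex,
            # attained at a vertex (the coordinate with the smallest u).
            x = np.zeros(n)
            x[np.argmin(u)] = 1.0
            return x

        u = (U_lo + U_hi) / 2.0
        x_avg = np.zeros(n)
        T = 2000  # oracle calls; accuracy improves like O(1/sqrt(T))
        for t in range(1, T + 1):
            x = oracle(u)                 # best response to the current u
            x_avg += (x - x_avg) / t      # running average of oracle outputs
            u = np.clip(u + x / np.sqrt(t), U_lo, U_hi)  # projected ascent on u
        print("approximate robust solution:", np.round(x_avg, 3))
        print("its worst-case objective:", U_hi @ x_avg)  # x_avg >= 0, box max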