
    Design of Hybrid Regrouping PSO-GA based Sub-optimal Networked Control System with Random Packet Losses

    In this paper, a new approach is presented to design sub-optimal state feedback regulators over Networked Control Systems (NCS) with random packet losses. The optimal regulator gains with guaranteed stability are designed from the nominal discrete-time model of the plant using a Lyapunov technique, which yields a set of Bilinear Matrix Inequalities (BMIs). To reduce the computational complexity of the BMIs, a Genetic Algorithm (GA) based approach coupled with standard interior point methods for LMIs is adopted. A Regrouping Particle Swarm Optimization (RegPSO) based method is then employed to optimally choose the weighting matrices for the state feedback regulator design, subject to the GA-based stability checking criterion, i.e., the BMIs. The hybrid optimization methodology put forward in this paper not only reduces the computational difficulty of the feasibility check for selecting optimal stabilizing gains but also minimizes other time-domain performance criteria, such as the expected value of the set-point tracking error, through optimum weight selection in the LQR design for the nominal system.
    Comment: 27 pages, 7 figures
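    A minimal sketch of the outer weight-selection idea, assuming a plain random search standing in for RegPSO and a closed-loop spectral-radius test standing in for the GA/BMI feasibility check; the plant matrices and search ranges below are illustrative, not taken from the paper.

```python
# Toy sketch: choose LQR weighting matrices Q, R for a nominal discrete-time
# plant so that the closed loop is stable and a crude tracking cost is small.
# Random search stands in for RegPSO; a spectral-radius check stands in for
# the GA/BMI feasibility test described in the paper.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative plant
B = np.array([[0.005], [0.1]])

def lqr_gain(Q, R):
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # discrete LQR gain K

def cost(K, steps=200):
    x, J = np.array([1.0, 0.0]), 0.0
    for _ in range(steps):
        J += float(x @ x)                 # crude set-point tracking cost
        x = (A - B @ K) @ x
    return J

rng = np.random.default_rng(0)
best = None
for _ in range(200):                      # outer weight-search loop (random here)
    Q = np.diag(rng.uniform(0.1, 10.0, size=2))
    R = np.array([[rng.uniform(0.1, 10.0)]])
    K = lqr_gain(Q, R)
    if max(abs(np.linalg.eigvals(A - B @ K))) >= 1.0:   # stability check
        continue
    J = cost(K)
    if best is None or J < best[0]:
        best = (J, Q, R, K)

print("best cost", best[0], "\nK =", best[3])
```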

    A Parameterized Complexity Analysis of Bi-level Optimisation with Evolutionary Algorithms

    Bi-level optimisation problems have gained increasing interest in the field of combinatorial optimisation in recent years. With this paper, we start the runtime analysis of evolutionary algorithms for bi-level optimisation problems. We examine two NP-hard problems, the generalised minimum spanning tree problem (GMST) and the generalised travelling salesman problem (GTSP), in the context of parameterised complexity. For the generalised minimum spanning tree problem, we analyse the two approaches presented by Hu and Raidl (2012), which differ in their chosen representation of possible solutions, with respect to the number of clusters. Our results show that a (1+1) EA working with the spanning nodes representation is not a fixed-parameter evolutionary algorithm for the problem, whereas the global structure representation enables the problem to be solved in fixed-parameter time. We present hard instances for each approach and show that the two approaches are highly complementary by proving that each solves the other's hard instances very efficiently. For the generalised travelling salesman problem, we analyse the problem with respect to the number of clusters in the problem instance. Our results show that a (1+1) EA working with the global structure representation is a fixed-parameter evolutionary algorithm for the problem.
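    For reference, the basic (1+1) EA scheme analysed in such runtime results is very short; the sketch below uses a plain bitstring with OneMax as a placeholder fitness, not the GMST/GTSP representations studied in the paper.

```python
# Generic (1+1) EA: keep one bitstring, flip each bit independently with
# probability 1/n, and accept the offspring if it is at least as good.
import random

def one_plus_one_ea(n, fitness, max_iters=100_000, seed=1):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(max_iters):
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]   # standard bit-flip mutation
        fy = fitness(y)
        if fy >= fx:                                              # elitist acceptance
            x, fx = y, fy
    return x, fx

# OneMax placeholder fitness: number of ones in the bitstring
best, best_fitness = one_plus_one_ea(32, sum)
print(best_fitness)
```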

    Wireless MIMO Switching with Zero-forcing Relaying and Network-coded Relaying

    A wireless relay with multiple antennas is called a multiple-input-multiple-output (MIMO) switch if it maps its input links to its output links using "precode-and-forward." Namely, the MIMO switch precodes the received signal vector in the uplink using some matrix for transmission in the downlink. This paper studies the scenario of K stations and a MIMO switch that has full channel state information. The precoder at the MIMO switch is either a zero-forcing matrix or a network-coded matrix. With the zero-forcing precoder, each destination station receives only its desired signal with enhanced noise but no interference. With the network-coded precoder, each station receives not only its desired signal and noise but possibly also self-interference, which can be perfectly canceled. Precoder design for optimizing the received signal-to-noise ratios at the destinations is investigated. For zero-forcing relaying, the problem is solved in closed form in the two-user case, whereas for more users, efficient algorithms are proposed and shown to be close to what can be achieved by an extensive random search. For network-coded relaying, we present efficient iterative algorithms that can boost the throughput further.
    Comment: This version is to appear in IEEE Journal on Selected Areas in Communications later in 201
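    A toy construction of the zero-forcing idea, assuming square invertible channel matrices and using pseudo-inverses; the paper's designs additionally optimize the received SNRs, which this sketch does not attempt.

```python
# Toy zero-forcing precoder for a MIMO switch: with uplink channel Hu and
# downlink channel Hd, choose G so that Hd @ G @ Hu equals a permutation P,
# i.e. each station receives only its intended signal (with enhanced noise).
import numpy as np

rng = np.random.default_rng(0)
K = 4                                   # number of single-antenna stations
Hu = rng.standard_normal((K, K))        # uplink: stations -> relay antennas
Hd = rng.standard_normal((K, K))        # downlink: relay antennas -> stations
perm = [1, 0, 3, 2]                     # desired switching pattern (pairwise exchange)
P = np.eye(K)[perm]

G = np.linalg.pinv(Hd) @ P @ np.linalg.pinv(Hu)   # zero-forcing precoder

print(np.round(Hd @ G @ Hu, 6))         # ~= P: interference is nulled
```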

    Better Algorithms for Hybrid Circuit and Packet Switching in Data Centers

    Hybrid circuit and packet switching for data center networking (DCN) has received considerable research attention recently. A hybrid-switched DCN employs a much faster circuit switch that is reconfigurable at a nontrivial cost, and a much slower packet switch that is reconfigurable at no cost, to interconnect its racks of servers. The research problem is: given a traffic demand matrix (between the racks), compute a good circuit switch configuration schedule so that the vast majority of the traffic demand is removed by the circuit switch, leaving a remaining demand matrix that contains only small elements for the packet switch to handle. In this paper, we propose two new hybrid switch scheduling algorithms under two different scheduling constraints. Our first algorithm, called 2-hop Eclipse, strikes a much better tradeoff between the resulting performance (of the hybrid switch) and the computational complexity (of the algorithm) than the state-of-the-art solution Eclipse/Eclipse++. Our second algorithm, called BFF (best first fit), is the first hybrid switching solution that exploits the potential partial reconfiguration capability of the circuit switch for performance gains.
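    A simple greedy baseline (not 2-hop Eclipse or BFF) illustrating the scheduling problem: repeatedly pick a max-weight matching of the remaining demand as the next circuit configuration and leave the residual demand to the packet switch. The demand matrix, number of configurations, and slot size are made up.

```python
# Greedy baseline for hybrid switch scheduling: each circuit configuration is
# a max-weight perfect matching of the remaining rack-to-rack demand; whatever
# is left over after all configurations goes to the packet switch.
import numpy as np
from scipy.optimize import linear_sum_assignment

def greedy_schedule(demand, num_configs=3, slot=10.0):
    remaining = demand.astype(float).copy()
    schedule = []
    for _ in range(num_configs):
        rows, cols = linear_sum_assignment(remaining, maximize=True)   # max-weight matching
        served = np.zeros_like(remaining)
        served[rows, cols] = np.minimum(remaining[rows, cols], slot)
        remaining -= served
        schedule.append(list(zip(rows.tolist(), cols.tolist())))
    return schedule, remaining   # residual demand handled by the packet switch

demand = np.array([[0, 12, 3], [8, 0, 15], [5, 9, 0]])
configs, residual = greedy_schedule(demand)
print(configs)
print(residual)
```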

    Household Electricity Consumption Data Cleansing

    Load curve data in power systems refers to users' electrical energy consumption data periodically collected with meters. It has become one of the most important assets for modern power systems, and many operational decisions are made based on the information discovered in the data. Load curve data, however, usually suffers from corruption caused by various factors, such as data transmission errors or malfunctioning meters. To solve the problem, tremendous research effort has been devoted to load curve data cleansing. Most existing approaches apply outlier detection methods from the supply side (i.e., electricity service providers), which may only have aggregated load data. In this paper, we propose to seek aid from the demand side (i.e., electricity service users). With the help of readily available knowledge of consumers' appliances, we present a new appliance-driven approach to load curve data cleansing. This approach utilizes data generation rules and a Sequential Local Optimization Algorithm (SLOA) to solve the Corrupted Data Identification Problem (CDIP). We evaluate the performance of SLOA with real-world trace data and synthetic data. The results indicate that, compared to existing load data cleansing methods such as B-spline smoothing, our approach has better overall performance and can effectively identify consecutive corrupted data. Experimental results also demonstrate that our method is robust across various tests. Our method provides a highly feasible and reliable solution to an emerging industry application.
    Comment: 12 pages, 12 figures; update: modified title and introduction, and corrected some typos
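    A toy illustration of the appliance-driven idea (not the paper's SLOA): readings outside the range implied by an assumed set of appliance ratings, and implausibly long flat-zero runs, are flagged as corrupted and repaired by interpolation. The ratings, thresholds, and trace are invented for the example.

```python
# Toy appliance-driven cleansing: the household's appliance ratings bound the
# feasible consumption, so out-of-range readings and long runs of exact zeros
# (likely meter outages) are flagged and repaired by linear interpolation.
import numpy as np

appliance_ratings_kw = [0.15, 1.2, 2.0]           # assumed appliance set
max_feasible = sum(appliance_ratings_kw)           # all appliances on at once

def cleanse(load, max_zero_run=4):
    load = load.astype(float).copy()
    bad = (load < 0) | (load > max_feasible)
    run = 0
    for i, v in enumerate(load):                   # flag long flat-zero runs
        run = run + 1 if v == 0 else 0
        if run >= max_zero_run:
            bad[i - max_zero_run + 1: i + 1] = True
    good = ~bad
    load[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), load[good])
    return load, bad

raw = np.array([0.3, 0.4, 9.9, 0.5, 0.0, 0.0, 0.0, 0.0, 0.6, -1.0, 0.7])
clean, flagged = cleanse(raw)
print(flagged.astype(int))
print(np.round(clean, 2))
```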

    On the Problem of Optimal Path Encoding for Software-Defined Networks

    Packet networks need to maintain state in the form of forwarding tables at each switch. The cost of this state increases as networks support ever more sophisticated per-flow routing, traffic engineering, and service chaining. Per-flow or per-path state at the switches can be eliminated by encoding each packet's desired path in its header. A key component of such a method is an efficient encoding of paths through the network. We introduce a mathematical formulation of this optimal path-encoding problem. We prove that the problem is APX-hard by showing that approximating it to within a factor less than 8/7 is NP-hard. Thus, at best we can hope for a constant-factor approximation algorithm. We then present such an algorithm, approximating the optimal path-encoding problem to within a factor of 2. Finally, we provide empirical results illustrating the effectiveness of the proposed algorithm.
    Comment: To appear in IEEE/ACM Transactions on Networking
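    To make the object being optimized concrete, here is an illustrative source-routing encoding (not the paper's 2-approximation): each switch gives its output ports fixed-length binary labels and a packet's header is the concatenation of labels along its path, so header length depends on how many bits each traversed switch needs. The topology is made up.

```python
# Illustrative path encoding: per-switch output-port labels concatenated into
# the packet header. The optimization problem in the paper is to choose the
# encodings so that the worst-case header length is minimized.
from math import ceil, log2

ports = {                       # assumed topology: switch -> output ports (next hops)
    "s1": ["s2", "s3"],
    "s2": ["s1", "s3", "s4", "s5"],
    "s3": ["s1", "s2"],
    "s4": ["s2"],
    "s5": ["s2"],
}

def label_bits(switch):
    return max(1, ceil(log2(len(ports[switch]))))   # bits per output-port label

def encode_path(path):
    header = ""
    for here, nxt in zip(path, path[1:]):
        idx = ports[here].index(nxt)
        header += format(idx, f"0{label_bits(here)}b")
    return header

path = ["s1", "s2", "s5"]
print(encode_path(path), "->", len(encode_path(path)), "header bits")
```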

    Energy efficient D2D communications in dynamic TDD systems

    Network-assisted device-to-device communication is a promising technology for improving the performance of proximity-based services. This paper demonstrates how the integration of device-to-device communications and dynamic time-division duplex can improve the energy efficiency of future cellular networks, leading to greener system operation and prolonged battery lifetime of mobile devices. We jointly optimize the mode selection, transmission period, and power allocation to minimize the energy consumption (from both a system and a device perspective) while satisfying a given rate requirement. The radio resource management problems are formulated as mixed-integer nonlinear programming problems. Although these are NP-hard in general, we exploit the problem structure to design efficient algorithms that optimally solve several problem cases. For the remaining cases, a heuristic algorithm that computes near-optimal solutions while respecting practical constraints on execution time and signaling overhead is also proposed. Simulation results confirm that the combination of device-to-device and flexible time-division duplex technologies can significantly enhance the spectrum and energy efficiency of next generation cellular systems.
    Comment: Submitted to IEEE Journal on Selected Areas in Communications
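    A toy, single-pair version of the mode-selection and power-allocation decision: brute force over mode (direct D2D vs. two-hop via the base station) and a grid of transmit powers, keeping the lowest-power choice that meets a Shannon-rate target. All channel gains, the bandwidth, and the rate requirement are invented, and transmit power summed over hops stands in for the paper's energy objective.

```python
# Toy mode selection for one D2D pair: meet a rate target at minimum summed
# transmit power, choosing between direct D2D and relaying via the base station.
import numpy as np

BW, N0 = 1e6, 1e-13                       # bandwidth [Hz], noise power [W]
rate_req = 2e6                            # required rate [bit/s]
g_d2d, g_up, g_down = 1e-7, 1e-9, 1e-9    # assumed channel gains

def rate(p, g):
    return BW * np.log2(1.0 + p * g / N0)   # Shannon rate at transmit power p

candidates = []
for p in np.linspace(0.001, 0.2, 400):      # transmit power grid [W]
    if rate(p, g_d2d) >= rate_req:          # direct D2D: one transmission
        candidates.append((p, "D2D", p))
    if rate(p, g_up) >= rate_req and rate(p, g_down) >= rate_req:
        candidates.append((2 * p, "cellular", p))   # uplink + downlink hops

power_sum, mode, tx_power = min(candidates)
print(mode, "with per-link power", round(tx_power, 4), "W")
```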

    Experimental Design for Cost-Aware Learning of Causal Graphs

    We consider the minimum cost intervention design problem: given the essential graph of a causal graph and a cost to intervene on each variable, identify the set of interventions with minimum total cost that can learn any causal graph with the given essential graph. We first show that this problem is NP-hard. We then prove that a greedy algorithm achieves a constant factor approximation to this problem. We then constrain the sparsity of each intervention. We develop an algorithm that returns an intervention design that is nearly optimal in terms of size for sparse graphs with sparse interventions, and we discuss how to use it when there are costs on the vertices.
    Comment: In NIPS 201
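    The abstract does not spell out the greedy rule, so the sketch below shows only the generic cost-effectiveness greedy (pick the option whose cost per newly covered element is smallest) that such constant-factor arguments typically build on; the "elements" and costs are made-up toy data, not the paper's construction.

```python
# Generic cost-effectiveness greedy: repeatedly pick the intervention whose
# cost per newly covered element is smallest, until everything is covered.
def greedy_cover(universe, covers, cost):
    covered, chosen, total = set(), [], 0.0
    while covered != universe:
        best = min(
            (s for s in covers if covers[s] - covered),
            key=lambda s: cost[s] / len(covers[s] - covered),
        )
        covered |= covers[best]
        chosen.append(best)
        total += cost[best]
    return chosen, total

universe = {("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")}
covers = {            # toy data: intervening on a vertex resolves these pairs
    "a": {("a", "b"), ("a", "c")},
    "b": {("a", "b"), ("b", "c")},
    "c": {("a", "c"), ("b", "c"), ("c", "d")},
    "d": {("c", "d")},
}
cost = {"a": 2.0, "b": 1.0, "c": 2.5, "d": 1.0}
print(greedy_cover(universe, covers, cost))
```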

    Power-Aware Virtual Network Function Placement and Routing using an Abstraction Technique

    Network Function Virtualization (NFV) is very promising for efficient provisioning of network services and is attracting a lot of attention. NFV can be implemented on commercial off-the-shelf servers or Physical Machines (PMs), and many network services can be offered as a sequence of Virtual Network Functions (VNFs), known as VNF chains. Furthermore, many existing network devices (e.g., switches) and collocated PMs are underutilized or over-provisioned, resulting in low power efficiency. In order to achieve more energy-efficient systems, this work aims at designing the placement of VNFs such that the total power consumption in network nodes and PMs is minimized, while meeting the delay and capacity requirements of the foreseen demands. Based on existing switch and PM power models, we propose an Integer Linear Programming (ILP) formulation to find the optimal solution. We also propose a heuristic based on the concept of Blocking Islands (BI), and a baseline heuristic based on the Betweenness Centrality (BC) property of the graph. Both heuristics and the ILP solution have been compared in terms of total power consumption, delay, demand acceptance rate, and computation time. Our simulation results suggest that the BI-based heuristic is superior to the BC-based heuristic and very close to the optimal ILP solution in terms of total power consumption and demand acceptance rate. Compared to the ILP, the proposed BI-based heuristic is significantly faster and results in 22% lower end-to-end delay, at the cost of consuming 6% more power on average.
    Comment: IEEE Global Communications Conference (GLOBECOM) 2018, Abu Dhabi, UAE
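    A toy instance of the kind of placement ILP described, written with PuLP (my choice of modelling interface, not the paper's); the PM capacities, power numbers, and two-VNF chain are invented, and switch power, routing, and delay constraints are omitted.

```python
# Toy VNF-placement ILP: place a two-VNF chain on PMs so that total power
# (idle power of powered-on PMs plus per-core dynamic power) is minimized,
# subject to CPU capacity and to powering on any PM that hosts a VNF.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

pms = ["pm1", "pm2", "pm3"]
vnfs = ["fw", "nat"]                        # a single service chain: fw -> nat
cpu_req = {"fw": 4, "nat": 2}
cpu_cap = {"pm1": 8, "pm2": 4, "pm3": 4}
idle_power = {"pm1": 120.0, "pm2": 90.0, "pm3": 90.0}
per_cpu_power = 10.0

prob = LpProblem("vnf_placement", LpMinimize)
x = {(v, p): LpVariable(f"x_{v}_{p}", cat=LpBinary) for v in vnfs for p in pms}  # VNF v on PM p
on = {p: LpVariable(f"on_{p}", cat=LpBinary) for p in pms}                       # PM powered on

prob += (lpSum(idle_power[p] * on[p] for p in pms)
         + lpSum(per_cpu_power * cpu_req[v] * x[v, p] for v in vnfs for p in pms))

for v in vnfs:                               # each VNF placed exactly once
    prob += lpSum(x[v, p] for p in pms) == 1
for p in pms:                                # capacity, and PM must be on if used
    prob += lpSum(cpu_req[v] * x[v, p] for v in vnfs) <= cpu_cap[p] * on[p]

prob.solve()
print([(v, p) for (v, p) in x if value(x[v, p]) > 0.5], value(prob.objective))
```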

    How Hard Is Bribery in Elections?

    We study the complexity of influencing elections through bribery: how computationally complex is it for an external actor to determine whether, by bribing voters within a certain budget, a specified candidate can be made the election's winner? We study this problem for election systems as varied as scoring protocols and Dodgson voting, and in a variety of settings regarding homogeneous vs. nonhomogeneous electorate bribability, bounded-size vs. arbitrary-size candidate sets, weighted vs. unweighted voters, and succinct vs. nonsuccinct input specification. We obtain both polynomial-time bribery algorithms and proofs of the intractability of bribery, and indeed our results show that the complexity of bribery is extremely sensitive to the setting. For example, we find settings in which bribery is NP-complete but manipulation (by voters) is in P, and we find settings in which bribing weighted voters is NP-complete but bribing voters with individual bribe thresholds is in P. For the broad class of elections (including plurality, Borda, k-approval, and veto) known as scoring protocols, we prove a dichotomy result for bribery of weighted voters: we find a simple-to-evaluate condition that classifies every case as either NP-complete or in P.
    Comment: Earlier version appears in Proc. of AAAI-06, pp. 641-646, 200
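    For intuition about the tractable side, here is a sketch of one easy special case: plurality with unweighted voters and unit bribe prices, decided by a greedy check that repeatedly bribes a voter of the currently strongest rival. The vote profile is made up, and ties are resolved in favour of the preferred candidate for simplicity; the NP-complete settings in the paper are of course not captured by this.

```python
# Greedy bribery check for plurality, unweighted voters, unit prices:
# with a budget of k bribes, repeatedly move a vote from the currently
# strongest rival to the preferred candidate and test whether it now wins.
from collections import Counter

def plurality_bribery(votes, preferred, budget):
    scores = Counter(votes)
    scores.setdefault(preferred, 0)
    for _ in range(budget):
        rivals = {c: s for c, s in scores.items() if c != preferred}
        if not rivals or max(rivals.values()) <= scores[preferred]:
            break
        top = max(rivals, key=rivals.get)      # bribe a voter of the strongest rival
        scores[top] -= 1
        scores[preferred] += 1
    rival_scores = [s for c, s in scores.items() if c != preferred]
    return not rival_scores or scores[preferred] >= max(rival_scores)

votes = ["a", "a", "a", "b", "b", "c", "c", "c", "c"]
print(plurality_bribery(votes, "a", 1))   # True: one bribe taken from c suffices
```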