
    Leveraging Physical Layer Capabilities: Distributed Scheduling in Interference Networks with Local Views

    In most wireless networks, nodes have only limited local information about the state of the network, such as connectivity and channel state information. With only limited local information, the nodes' knowledge of the network is mismatched, so they must make their decisions in a distributed fashion. In this paper, we pose the following question: if every node has network state information only about a small neighborhood, how and when should nodes choose to transmit? While link scheduling answers this question for point-to-point physical layers designed for an interference-avoidance paradigm, we look for answers in cases where interference can be embraced by advanced PHY-layer design, as suggested by results in network information theory. To make progress on this challenging problem, we propose a constructive distributed algorithm that achieves rates higher than link scheduling based on interference avoidance, especially when each node knows more than one hop of network state information. We compare our new aggressive algorithm to the conservative algorithm we presented in [1]. Both algorithms schedule sub-networks such that each sub-network can employ advanced interference-embracing coding schemes to achieve higher rates. Our innovation is in the identification, selection and scheduling of sub-networks, especially when sub-networks are larger than a single link.
    Comment: 14 pages, submitted to IEEE/ACM Transactions on Networking, October 201
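
    As a rough illustration of the kind of decision the abstract describes, the sketch below has each node simulate a greedy lowest-ID independent-set heuristic on its own k-hop local view of a conflict graph and transmit only if it selects itself. The functions local_view and decide_transmit, the greedy rule, and the toy conflict graph are assumptions made for this example; they stand in for a generic local scheduling heuristic, not for the paper's sub-network scheduling algorithm. The run also shows how small views can yield mismatched, conflicting decisions that disappear once nodes see more hops.

    # Illustrative sketch (not the paper's algorithm): each node decides whether
    # to transmit using only a k-hop local view of the conflict graph.

    def local_view(conflicts, node, hops):
        """Nodes within `hops` hops of `node` in the conflict graph."""
        frontier, seen = {node}, {node}
        for _ in range(hops):
            frontier = {v for u in frontier for v in conflicts[u]} - seen
            seen |= frontier
        return seen

    def decide_transmit(conflicts, node, hops=2):
        """Simulate greedy lowest-ID independent-set selection on the local
        view and transmit iff this node selects itself."""
        view = local_view(conflicts, node, hops)
        active, selected = set(view), set()
        for v in sorted(view):                        # greedy over visible nodes
            if v in active:
                selected.add(v)
                active -= {v} | (conflicts[v] & view)  # drop v and its visible conflicts
        return node in selected

    # Conflict graph: an edge means two nodes interfere if both transmit.
    conflicts = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    for hops in (2, 3):
        schedule = [n for n in conflicts if decide_transmit(conflicts, n, hops)]
        print(hops, schedule)  # hops=2: [0, 2, 3] (views disagree), hops=3: [0, 2]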

    Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints

    We investigate two new optimization problems -- minimizing a submodular function subject to a submodular lower bound constraint (submodular cover) and maximizing a submodular function subject to a submodular upper bound constraint (submodular knapsack). We are motivated by a number of real-world applications in machine learning, including sensor placement and data subset selection, which require maximizing a certain submodular function (like coverage or diversity) while simultaneously minimizing another (like cooperative cost). These problems are often posed as minimizing the difference between submodular functions [14, 35], which is inapproximable in the worst case. We show, however, that by phrasing these problems as constrained optimization, which is more natural for many applications, we achieve a number of bounded approximation guarantees. We also show that the two problems are closely related, and that an approximation algorithm for one can be used to obtain an approximation guarantee for the other. We provide hardness results for both problems, showing that our approximation factors are tight up to log factors. Finally, we empirically demonstrate the performance and good scalability properties of our algorithms.
    Comment: 23 pages. A short version of this appeared in Advances of NIPS-201
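
    A minimal greedy sketch of the submodular-cover flavour of the problem: minimize a cost f(S) while enforcing a submodular lower bound g(S) >= target. The cost-benefit greedy below and the toy sensor-placement instance (regions, price) are generic illustrations under assumed oracle access to f and g; they are not the approximation algorithms analyzed in the paper.

    def greedy_submodular_cover(ground_set, f, g, target):
        """Add the element with the best marginal-coverage per marginal-cost
        ratio until the coverage constraint g(S) >= target is met."""
        S = set()
        while g(S) < target:
            best, best_ratio = None, 0.0
            for e in ground_set - S:
                gain = g(S | {e}) - g(S)
                cost = max(f(S | {e}) - f(S), 1e-12)  # guard against zero marginal cost
                if gain / cost > best_ratio:
                    best, best_ratio = e, gain / cost
            if best is None:      # no element improves coverage: constraint unreachable
                raise ValueError("target coverage unreachable")
            S.add(best)
        return S

    # Toy sensor-placement style instance: g = coverage (submodular), f = cost (modular here).
    regions = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d", "e"}}
    price   = {0: 1.0, 1: 1.0, 2: 2.0}
    g = lambda S: len(set().union(*(regions[i] for i in S))) if S else 0
    f = lambda S: sum(price[i] for i in S)
    print(greedy_submodular_cover(set(regions), f, g, target=4))  # {0, 2}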

    Distributed Model Predictive Control Using a Chain of Tubes

    A new distributed MPC algorithm for the regulation of dynamically coupled subsystems is presented in this paper. The current control action is computed via two robust controllers working in a nested fashion. The inner controller builds a nominal reference trajectory from a decentralized perspective. The outer controller uses this information to account for the effects of the coupling and generate a distributed control action. The tube-based approach to robustness is employed. A supplementary constraint is included in the outer optimization problem to guarantee recursive feasibility of the overall controller.
    Comment: Accepted for presentation at the UKACC CONTROL 2016 conference (Belfast, UK)
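
    The tube-based idea behind the nested controllers can be sketched for a single subsystem: a nominal trajectory (z, v) is generated without disturbances, and the applied input u = v + K(x - z) keeps the true state inside a tube around it. The dynamics A and B, the gain K, and the placeholder nominal law below are assumptions for this sketch; the paper's distributed outer optimization and coupling handling are not reproduced.

    import numpy as np

    A = np.array([[1.0, 1.0], [0.0, 1.0]])   # double-integrator-like subsystem (assumed)
    B = np.array([[0.5], [1.0]])
    K = np.array([[-0.4, -1.2]])             # stabilizing ancillary gain (assumed)

    def nominal_input(z):
        """Stand-in for the decentralized nominal controller: simple state feedback."""
        return K @ z

    x = np.array([[2.0], [0.0]])             # true (disturbed) state
    z = x.copy()                             # nominal (disturbance-free) state
    rng = np.random.default_rng(0)

    for t in range(20):
        v = nominal_input(z)                 # inner, nominal control action
        u = v + K @ (x - z)                  # outer, tube-based correction
        w = rng.uniform(-0.05, 0.05, (2, 1)) # bounded disturbance
        x = A @ x + B @ u + w                # true subsystem
        z = A @ z + B @ v                    # nominal subsystem
        print(t, float(np.linalg.norm(x - z)))  # error stays bounded: a tube around z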

    Belief Tree Search for Active Object Recognition

    Active Object Recognition (AOR) has been approached as an unsupervised learning problem, in which optimal trajectories for object inspection are not known and must be discovered by reducing label uncertainty measures or by training with reinforcement learning. Such approaches offer no guarantees on the quality of their solution. In this paper, we treat AOR as a Partially Observable Markov Decision Process (POMDP) and find near-optimal policies on training data using Belief Tree Search (BTS) on the corresponding belief Markov Decision Process (MDP). AOR then reduces to the problem of transferring knowledge from near-optimal policies on the training set to the test set. We train a Long Short-Term Memory (LSTM) network to predict the best next action on the training set rollouts. We show that the proposed AOR method generalizes well to novel views of familiar objects and also to novel objects. We compare this supervised scheme against guided policy search, and find that the LSTM network reaches higher recognition accuracy than the guided policy method. We further look into optimizing the observation function to increase the total collected reward of the optimal policy. In AOR, the observation function is known only approximately. We propose a gradient-based update to this approximate observation function to increase the total reward of any policy. We show that by optimizing the observation function and retraining the supervised LSTM network, the AOR performance on the test set improves significantly.
    Comment: IROS 201
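
    A generic belief tree search on a toy POMDP illustrates the kind of search the paper runs on the belief MDP; the AOR model, the rewards, and the LSTM imitation step are not reproduced, and the tables T, Z, R and the functions belief_update and bts_value below are assumptions made for this sketch.

    import numpy as np

    S, A, O = 2, 2, 2                          # states, actions, observations (toy sizes)
    T = np.array([[[0.9, 0.1], [0.5, 0.5]],    # T[a, s, s'] transition probabilities
                  [[0.5, 0.5], [0.1, 0.9]]])
    Z = np.array([[0.8, 0.2], [0.3, 0.7]])     # Z[s', o] observation probabilities
    R = np.array([[1.0, -1.0], [-1.0, 1.0]])   # R[a, s] immediate reward

    def belief_update(b, a, o):
        """Bayes update of the belief after taking action a and observing o."""
        bp = Z[:, o] * (b @ T[a])              # unnormalized posterior over s'
        return bp / bp.sum() if bp.sum() > 0 else bp

    def bts_value(b, depth, gamma=0.95):
        """Expectimax over the belief tree: max over actions, expectation over observations."""
        if depth == 0:
            return 0.0
        best = -np.inf
        for a in range(A):
            value = b @ R[a]                   # expected immediate reward
            for o in range(O):
                p_o = (b @ T[a]) @ Z[:, o]     # probability of observing o
                if p_o > 1e-12:
                    value += gamma * p_o * bts_value(belief_update(b, a, o), depth - 1)
            best = max(best, value)
        return best

    b0 = np.array([0.5, 0.5])
    print("value of uniform belief, 3-step lookahead:", bts_value(b0, depth=3))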