
    Resilient Submodular Maximization For Control And Sensing

    Fundamental applications in control, sensing, and robotics motivate the design of systems by selecting system elements, such as actuators or sensors, subject to constraints that require the elements not only to be few in number, but also to satisfy heterogeneity or interdependency constraints (called matroid constraints). For example, consider the following scenarios:
    - (Control) Actuator placement: In a power grid, how should we place a few generators both to guarantee its stabilization with minimal control effort and to satisfy interdependency constraints requiring the power grid to be controllable from the generators?
    - (Sensing) Sensor placement: In medical brain-wearable devices, how should we place a few sensors to ensure smoothing estimation capabilities?
    - (Robotics) Sensor scheduling: In a team of mobile robots, which few on-board sensors should we activate at each robot (subject to heterogeneity constraints on the number of sensors each robot can activate at each time) so as both to maximize the robots' battery life and to ensure the robots' capability to complete a formation control task?
    In the first part of this thesis we motivate the above design problems and propose the first algorithms to address them. Although traditional approaches to matroid-constrained maximization have met great success in machine learning and facility location, they cannot address the actuator placement problem above. In addition, although traditional approaches to sensor selection enable Kalman filtering capabilities, they do not enable the smoothing or formation control capabilities required in the above sensor placement and scheduling problems. Therefore, in the first part of the thesis we provide the first algorithms and prove they achieve the following characteristics: provable approximation performance: the algorithms guarantee a solution close to the optimal; minimal running time: the algorithms terminate with the same running time as state-of-the-art algorithms for matroid-constrained maximization; adaptiveness: where applicable, at each time step the algorithms select system elements based on the history of selections. We achieve the above by exploiting a submodular structure present in all the aforementioned problems (submodularity is a diminishing-returns property of set functions, parallel to convexity for continuous functions).
    In failure-prone and adversarial environments, however, sensors and actuators can fail or be attacked. Traditional design paradigms over matroid constraints then become insufficient, and resilient designs against attacks or failures become important. However, no approximation algorithms are known for their solution; indeed, the problem of resilient maximization over matroid constraints is NP-hard. In the second part of this thesis we motivate the general problem of resilient maximization over matroid constraints and propose the first algorithms to address it, thereby protecting any design over matroid constraints, not only within the boundaries of control, sensing, and robotics, but also within machine learning, facility location, and matroid-constrained optimization in general.
    In particular, in the second part of this thesis we provide the first algorithms and prove they achieve the following characteristics: resiliency: the algorithms are valid for any number of attacks or failures; adaptiveness: where applicable, at each time step the algorithms select system elements based on both the history of selections and the history of attacks or failures; provable approximation guarantees: the algorithms guarantee, for any submodular or merely monotone function, a solution close to the optimal; minimal running time: the algorithms terminate with the same running time as state-of-the-art algorithms for matroid-constrained maximization. We bound the performance of our algorithms using notions of curvature for monotone (not necessarily submodular) set functions that are established in the literature of submodular maximization.
    In the third and final part of this thesis we apply our tools for resilient maximization in robotics, in particular to the problem of active information gathering with mobile robots. This problem calls for designing the motion of a team of mobile robots to enable effective information gathering about a process of interest, in support of critical missions such as hazardous environmental monitoring, and search and rescue. In the third part of this thesis we therefore aim to protect such multi-robot information gathering tasks against attacks or failures that can result in the withdrawal of robots from the task. We conduct both numerical and hardware experiments in multi-robot multi-target tracking scenarios, and demonstrate the benefits as well as the performance of our approach.
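
    To make the matroid-constrained selection concrete, the sketch below shows the classical greedy rule for maximizing a monotone (submodular) objective under a simple partition matroid, i.e., per-robot budgets on how many sensors may be activated. It is a minimal illustration of the generic technique such designs build on, not the thesis's own algorithms; all names (greedy_partition_matroid, f, parts, budgets, coverage) are hypothetical placeholders.

        # Greedy sketch: maximize a monotone set function f under a partition matroid
        # (each part of the ground set has its own budget). Illustrative only.
        def greedy_partition_matroid(f, parts, budgets):
            """Repeatedly add the feasible element with the largest marginal gain
            f(S | {v}) - f(S), respecting the per-part budgets."""
            selected = set()
            remaining = {p: set(elems) for p, elems in parts.items()}
            counts = {p: 0 for p in parts}
            while True:
                best_gain, best_elem, best_part = 0.0, None, None
                for p, elems in remaining.items():
                    if counts[p] >= budgets[p]:
                        continue  # this part's budget is exhausted
                    for v in elems:
                        gain = f(selected | {v}) - f(selected)
                        if gain > best_gain:
                            best_gain, best_elem, best_part = gain, v, p
                if best_elem is None:  # no feasible element improves the objective
                    break
                selected.add(best_elem)
                remaining[best_part].discard(best_elem)
                counts[best_part] += 1
            return selected

        # Toy usage: a coverage-style objective over two robots, one sensor each.
        coverage = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}, 4: {"d"}}
        f = lambda S: len(set().union(*(coverage[v] for v in S))) if S else 0
        parts = {"robot1": {1, 2}, "robot2": {3, 4}}
        budgets = {"robot1": 1, "robot2": 1}
        print(greedy_partition_matroid(f, parts, budgets))  # prints the selected sensors, e.g. {1, 3}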

    Resilient Monotone Submodular Function Maximization

    In this paper, we focus on applications in machine learning, optimization, and control that call for the resilient selection of a few elements, e.g., features, sensors, or leaders, against a number of adversarial denial-of-service attacks or failures. In general, such resilient optimization problems are hard and cannot be solved exactly in polynomial time, even though they often involve objective functions that are monotone and submodular. Notwithstanding, in this paper we provide the first scalable, curvature-dependent algorithm for their approximate solution, which is valid for any number of attacks or failures and which, for functions with low curvature, guarantees superior approximation performance. Notably, curvature has been known to tighten approximations for several non-resilient maximization problems, yet its effect on resilient maximization had hitherto been unknown. We complement our theoretical analyses with supporting empirical evaluations.
    Comment: Improved suboptimality guarantees on the proposed algorithm and corrected a typo in Algorithm 1's statement.
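
    The curvature referred to above is commonly the total curvature of a monotone set function, a standard notion from the submodular maximization literature; the sketch below records that definition and the classical (non-resilient) curvature-dependent greedy bound for context, not the resilient bounds established in the paper itself.

        % Total curvature of a monotone set function f over a ground set V
        % (standard definition; assumes the normalization f(\emptyset) = 0).
        \[
          \kappa_f \;=\; 1 \;-\; \min_{v \in V}
          \frac{f(V) - f(V \setminus \{v\})}{f(\{v\})},
          \qquad \kappa_f \in [0, 1].
        \]
        % \kappa_f = 0 means f is modular (additive), where greedy is exactly optimal
        % under a matroid constraint; \kappa_f = 1 is the fully submodular worst case.
        % Classically, greedy achieves at least a 1/(1 + \kappa_f) fraction of the optimum
        % for monotone submodular maximization over a matroid (Conforti and Cornuejols, 1984).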

    Submodularity in Action: From Machine Learning to Signal Processing Applications

    Submodularity is a discrete-domain functional property that can be interpreted as mimicking the role of the well-known convexity/concavity properties in the continuous domain. Submodular functions exhibit strong structure that leads to efficient optimization algorithms with provable near-optimality guarantees. These characteristics, namely efficiency and provable performance bounds, are of particular interest for signal processing (SP) and machine learning (ML) practitioners, as a variety of discrete optimization problems are encountered in a wide range of applications. Conventionally, two general approaches exist to solve discrete problems: (i) relaxation into the continuous domain to obtain an approximate solution, or (ii) development of a tailored algorithm that applies directly in the discrete domain. In both approaches, worst-case performance guarantees are often hard to establish. Furthermore, such methods are often complex and thus not practical for large-scale problems. In this paper, we show how certain scenarios lend themselves to exploiting submodularity so as to construct scalable solutions with provable worst-case performance guarantees. We introduce a variety of submodular-friendly applications, and elucidate the relation of submodularity to convexity and concavity, which enables efficient optimization. With a mixture of theory and practice, we present different flavors of submodularity accompanied by illustrative real-world case studies from modern SP and ML. In all cases, optimization algorithms are presented, along with hints on how optimality guarantees can be established.
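
    For reference, the diminishing-returns property alluded to above can be stated as follows; these are the standard textbook definitions and guarantees, not results specific to this paper.

        % A set function f : 2^V -> R is submodular if adding an element helps less
        % as the set it is added to grows:
        \[
          f(A \cup \{v\}) - f(A) \;\ge\; f(B \cup \{v\}) - f(B)
          \qquad \text{for all } A \subseteq B \subseteq V \text{ and } v \in V \setminus B,
        \]
        % or, equivalently, f(A) + f(B) >= f(A \cup B) + f(A \cap B) for all A, B \subseteq V.
        % For monotone submodular f with f(\emptyset) = 0, greedy selection under a
        % cardinality constraint is a (1 - 1/e)-approximation
        % (Nemhauser, Wolsey, and Fisher, 1978).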

    Stability and Recovery for Independence Systems

    Two genres of heuristics that are frequently reported to perform much better on "real-world" instances than in the worst case are greedy algorithms and local search algorithms. In this paper, we systematically study these two types of algorithms for the problem of maximizing a monotone submodular set function subject to downward-closed feasibility constraints. We consider perturbation-stable instances, in the sense of Bilu and Linial [11], and precisely identify the stability threshold beyond which these algorithms are guaranteed to recover the optimal solution. Byproducts of our work include the first definition of perturbation-stability for non-additive objective functions, and a resolution of the worst-case approximation guarantee of local search in p-extendible systems.
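
    As background for the stability notion cited above, the classical Bilu and Linial definition, stated here in the additive (weighted) setting that this paper extends to non-additive objectives, is roughly the following sketch.

        % gamma-perturbation stability (additive setting, after Bilu and Linial):
        % an instance with element weights w and feasible family \mathcal{F} is
        % gamma-stable if every gamma-bounded reweighting leaves the optimum unchanged:
        \[
          w(e) \;\le\; w'(e) \;\le\; \gamma\, w(e) \;\; \forall e
          \quad \Longrightarrow \quad
          \arg\max_{S \in \mathcal{F}} \sum_{e \in S} w'(e)
          \;=\;
          \arg\max_{S \in \mathcal{F}} \sum_{e \in S} w(e).
        \]
        % On sufficiently stable instances, simple heuristics such as greedy and
        % local search can provably recover the optimal solution, which is the
        % threshold phenomenon the abstract refers to.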