
    Flow-directed PCA for monitoring networks

    Measurements recorded over monitoring networks often possess spatial and temporal correlation, inducing redundancies in the information provided. For river water quality monitoring in particular, flow-connected sites are likely to provide similar information. This paper proposes a novel approach to principal components analysis for reducing the dimensionality of spatiotemporal flow-connected network data and identifying common spatiotemporal patterns. The method is illustrated using monthly observations of total oxidized nitrogen for the Trent catchment area in England. Common patterns are revealed that remain hidden when the river network structure and temporal correlation are not accounted for. Such patterns provide valuable information for the design of future sampling strategies.
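    As a rough illustration in Python, the sketch below runs ordinary principal components analysis on a simulated months-by-sites matrix; the site count, month count, and random data are placeholders, and the paper's flow-directed variant additionally accounts for the river-network (flow-connection) structure and temporal correlation, which this plain PCA does not.

        # Minimal sketch: ordinary PCA on a (months x sites) matrix of readings.
        # All dimensions and data below are illustrative, not from the paper.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 20))            # 120 monthly observations at 20 sites
        Xc = X - X.mean(axis=0)                   # centre each site's series
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        explained = s**2 / np.sum(s**2)           # variance share of each component
        scores = Xc @ Vt.T[:, :2]                 # project onto the first two patterns
        print(explained[:2], scores.shape)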

    New Complexity Results and Algorithms for the Minimum Tollbooth Problem

    The inefficiency of the Wardrop equilibrium of nonatomic routing games can be eliminated by placing tolls on the edges of a network so that the socially optimal flow is induced as an equilibrium flow. A solution in which the minimum number of edges is tolled may be preferable over others due to its ease of implementation in real networks. In this paper we consider the minimum tollbooth (MINTB) problem, which seeks social-optimum-inducing tolls with minimum support. We prove for single commodity networks with linear latencies that the problem is NP-hard to approximate within a factor of 1.1377 through a reduction from the minimum vertex cover problem. Insights from network design motivate us to formulate a new variation of the problem where, in addition to placing tolls, it is allowed to remove edges unused by the social optimum. We prove that this new problem remains NP-hard even for single commodity networks with linear latencies, using a reduction from the partition problem. On the positive side, we give the first exact polynomial solution to the MINTB problem in an important class of graphs, namely series-parallel graphs. Our algorithm solves MINTB by first tabulating the candidate solutions for subgraphs of the series-parallel network and then combining them optimally.
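    The full algorithm is specific to series-parallel networks, but the core idea of inducing the social optimum with tolls on few edges can be seen in Pigou's classic two-link example. The Python toy below (not the paper's algorithm; all quantities are illustrative) shows that tolling a single edge with its marginal-cost toll already induces the socially optimal split, so the minimum toll support has size one here.

        # Toy illustration: Pigou's two-link network with latencies l1(x) = 1,
        # l2(x) = x and unit demand. Tolling only the variable-latency link
        # with its marginal-cost toll induces the social optimum.
        import numpy as np

        def social_cost(x2):                    # x2 = flow on the variable-latency link
            x1 = 1.0 - x2
            return x1 * 1.0 + x2 * x2           # total latency experienced by all flow

        x_opt = min(np.linspace(0, 1, 1001), key=social_cost)   # ~0.5
        toll = x_opt * 1.0                      # marginal-cost toll x * l2'(x) on link 2
        # With this toll, link 2's perceived cost x2 + toll equals link 1's
        # cost of 1 exactly at x2 = 0.5, i.e. the social optimum is an equilibrium.
        print(x_opt, toll, social_cost(x_opt))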

    Distributed Robust Set-Invariance for Interconnected Linear Systems

    We introduce a class of distributed control policies for networks of discrete-time linear systems with polytopic additive disturbances. The objective is to restrict the network-level state and controls to user-specified polyhedral sets at all times. This problem arises in many safety-critical applications. We consider two problems. First, given a communication graph characterizing the structure of the information flow in the network, we find the optimal distributed control policy by solving a single linear program. Second, we find the sparsest communication graph required for the existence of a distributed invariance-inducing control policy. Illustrative examples, including one on platooning, are presented.
    Comment: 8 pages. Submitted to American Control Conference (ACC), 201
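    As a much-simplified illustration of posing invariance as a linear program (a single scalar subsystem rather than the paper's distributed, polytopic setting), the Python sketch below searches for a feedback gain K such that the interval |x| <= 1 stays robustly invariant for x+ = a x + u + w with a bounded disturbance; all constants and variable names are assumptions made for the example.

        # Minimal sketch: find K for x+ = a*x + u + w, |w| <= wbar, so that
        # |x| <= 1 and |u| <= ubar are maintained, i.e. |a + K| + wbar <= 1
        # and |K| <= ubar, posed as a small linear program.
        import numpy as np
        from scipy.optimize import linprog

        a, wbar, ubar = 1.2, 0.3, 1.0           # illustrative numbers
        # Decision variables: [K, t], with t an upper bound on |a + K|.
        c = [0.0, 1.0]                          # minimise t
        A_ub = [[ 1.0, -1.0],                   #  a + K <= t
                [-1.0, -1.0],                   # -(a + K) <= t
                [ 0.0,  1.0],                   #  t <= 1 - wbar   (invariance)
                [ 1.0,  0.0],                   #  K <= ubar       (input bound)
                [-1.0,  0.0]]                   # -K <= ubar
        b_ub = [-a, a, 1.0 - wbar, ubar, ubar]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None), (0, None)])
        print(res.status, res.x)                # status 0: an invariance-inducing gain exists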

    Traffic jams and intermittent flows in microfluidic networks

    We investigate both experimentally and theoretically the traffic of particles flowing in microfluidic obstacle networks. We show that the traffic dynamics is a non-linear process: the particle current does not scale with the particle density even in the dilute limit where no particle collision occurs. We demonstrate that this non-linear behavior stems from long-range hydrodynamic interactions. Importantly, we also establish that there exists a maximal current above which no stationary particle flow can be sustained. For higher current values, intermittent traffic jams form, thereby inducing the ejection of the particles from the initial path and the subsequent invasion of the network. Finally, we put our findings in the broader context of the transport processes of driven particles in low dimensions.
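    A generic toy calculation in Python (not the paper's hydrodynamic model; the speed law and constants are assumptions) illustrates why a maximal current can exist: if long-range coupling makes each particle's speed decrease with the total occupancy N of a channel, the current J = N v(N) is non-linear in N and peaks at an intermediate filling, beyond which no larger stationary flow can be sustained.

        # Toy current-occupancy relation with a collective slowdown v(N) = v0*(1 - N/Nmax).
        import numpy as np

        v0, Nmax = 1.0, 100                     # illustrative constants
        N = np.arange(0, Nmax + 1)
        J = N * v0 * (1 - N / Nmax)             # current is non-linear in occupancy
        print(N[np.argmax(J)], J.max())         # maximal current at intermediate filling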

    Short-term plasticity as cause-effect hypothesis testing in distal reward learning

    Asynchrony, overlaps and delays in sensory-motor signals introduce ambiguity as to which stimuli, actions, and rewards are causally related. Only the repetition of reward episodes helps distinguish true cause-effect relationships from coincidental occurrences. In the model proposed here, a novel plasticity rule employs short- and long-term changes to evaluate hypotheses on cause-effect relationships. Transient weights represent hypotheses that are consolidated in long-term memory only when they consistently predict or cause future rewards. The main objective of the model is to preserve existing network topologies when learning with ambiguous information flows. Learning is also improved by biasing the exploration of the stimulus-response space towards actions that in the past occurred before rewards. The model indicates under which conditions beliefs can be consolidated in long-term memory, suggests a solution to the plasticity-stability dilemma, and proposes an interpretation of the role of short-term plasticity.
    Comment: Biological Cybernetics, September 201
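    A toy Python sketch of the idea (not the paper's exact rule, and with purely illustrative constants): a transient weight rises when a stimulus-action pair is followed by reward and decays otherwise, and only a persistently high transient weight, i.e. a hypothesis that keeps predicting reward, is consolidated into the long-term weight.

        # Transient "hypothesis" weight w_st vs. consolidated long-term weight w_lt.
        import random

        w_st, w_lt = 0.0, 0.0
        decay, gain, threshold = 0.95, 0.2, 0.8     # illustrative constants
        random.seed(1)
        for episode in range(200):
            paired_with_reward = random.random() < 0.7   # pair precedes reward 70% of the time
            w_st = decay * w_st + (gain if paired_with_reward else 0.0)
            if w_st > threshold:                         # consistent prediction -> consolidate
                w_lt += 0.05 * (1.0 - w_lt)
        print(round(w_st, 3), round(w_lt, 3))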

    Hamiltonian Monte Carlo Acceleration Using Surrogate Functions with Random Bases

    For big data analysis, the high computational cost of Bayesian methods often limits their application in practice. In recent years, there have been many attempts to improve the computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov Chain Monte Carlo (MCMC) method, namely Hamiltonian Monte Carlo (HMC). The key idea is to explore and exploit the structure and regularity in the parameter space of the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm which converges to the correct target distribution. We show that by choosing the basis functions and the optimization process differently, our method can be related to other approaches for the construction of surrogate functions, such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
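    A minimal Python sketch of the surrogate idea (not the paper's full algorithm): an expensive log-density is approximated with random Fourier features fitted by ridge regression, and the cheap surrogate gradient then drives an HMC leapfrog step. The Gaussian target, feature count, and all constants below are illustrative assumptions.

        # Surrogate log-density from random bases, used inside an HMC leapfrog step.
        import numpy as np

        rng = np.random.default_rng(0)
        d, m = 2, 200                                    # dimension, number of random bases

        def logp(x):                                     # "expensive" target: standard Gaussian
            return -0.5 * np.sum(x**2, axis=-1)

        W = rng.normal(size=(m, d))                      # random frequencies
        b = rng.uniform(0, 2 * np.pi, size=m)            # random phases

        def feat(X):                                     # random Fourier features
            return np.cos(X @ W.T + b)

        X_train = rng.normal(size=(500, d))              # design points (e.g. early MCMC draws)
        Phi = feat(X_train)
        theta = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(m), Phi.T @ logp(X_train))

        def grad_surrogate(x):                           # cheap gradient of the surrogate
            return -W.T @ (theta * np.sin(W @ x + b))

        # One leapfrog step of HMC driven by the surrogate gradient of log p:
        x, p, eps = np.zeros(d), rng.normal(size=d), 0.1
        p += 0.5 * eps * grad_surrogate(x)
        x += eps * p
        p += 0.5 * eps * grad_surrogate(x)
        print(np.round(x, 3), np.round(logp(x), 3))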