
    Route Swarm: Wireless Network Optimization through Mobility

    In this paper, we demonstrate a novel hybrid architecture for coordinating networked robots in sensing and information routing applications. The proposed INformation and Sensing driven PhysIcally REconfigurable robotic network (INSPIRE) consists of a Physical Control Plane (PCP), which commands agent positions, and an Information Control Plane (ICP), which regulates information flow towards communication/sensing objectives. We describe an instantiation in which a mobile robotic network is dynamically reconfigured to ensure high-quality routes between static wireless nodes, which act as source/destination pairs for information flow. The ICP commands the robots towards evenly distributed inter-flow allocations, with intra-flow configurations that maximize route quality. The PCP then guides the robots via potential-based control to reconfigure according to the ICP commands. This formulation, deemed Route Swarm, decouples information flow and physical control, generating a feedback loop between routing and sensing needs and robotic configuration. We demonstrate our propositions through simulation under a realistic wireless network regime.
    Comment: 9 pages, 4 figures, submitted to the IEEE International Conference on Intelligent Robots and Systems (IROS) 201
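
    The PCP's potential-based control can be pictured as gradient descent on an artificial potential in which each robot is attracted to the position the ICP assigns it and repelled from nearby teammates. The Python sketch below is a minimal, generic illustration of that idea, not the paper's controller; the function name, gains, and waypoint layout are assumptions.

    import numpy as np

    def potential_step(positions, targets, step=0.05, repulse_gain=0.5, min_sep=1.0):
        """One gradient step on a simple potential field: an attractive
        quadratic term pulls each robot toward its ICP-assigned target,
        and a short-range repulsive term keeps robots separated.
        Illustrative only; the gains are arbitrary."""
        grad = positions - targets  # gradient of 0.5 * ||p - t||^2
        for i in range(len(positions)):
            for j in range(len(positions)):
                if i == j:
                    continue
                diff = positions[i] - positions[j]
                dist = np.linalg.norm(diff)
                if 1e-9 < dist < min_sep:
                    # Repulsion grows sharply once robots are closer than min_sep.
                    grad[i] -= repulse_gain * diff / dist**3
        return positions - step * grad

    # Toy usage: four robots driven toward waypoints along a route.
    robots = np.random.rand(4, 2) * 10
    waypoints = np.array([[2.0, 2.0], [4.0, 2.0], [6.0, 2.0], [8.0, 2.0]])
    for _ in range(200):
        robots = potential_step(robots, waypoints)
    print(np.round(robots, 2))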

    Surprises in High-Dimensional Ridgeless Least Squares Interpolation

    Interpolators -- estimators that achieve zero training error -- have attracted growing attention in machine learning, mainly because state-of-the-art neural networks appear to be models of this type. In this paper, we study minimum $\ell_2$-norm ("ridgeless") interpolation in high-dimensional least squares regression. We consider two different models for the feature distribution: a linear model, where the feature vectors $x_i \in \mathbb{R}^p$ are obtained by applying a linear transform to a vector of i.i.d. entries, $x_i = \Sigma^{1/2} z_i$ (with $z_i \in \mathbb{R}^p$); and a nonlinear model, where the feature vectors are obtained by passing the input through a random one-layer neural network, $x_i = \varphi(W z_i)$ (with $z_i \in \mathbb{R}^d$, $W \in \mathbb{R}^{p \times d}$ a matrix of i.i.d. entries, and $\varphi$ an activation function acting componentwise on $W z_i$). We recover -- in a precise quantitative way -- several phenomena that have been observed in large-scale neural networks and kernel machines, including the "double descent" behavior of the prediction risk and the potential benefits of overparametrization.
    Comment: 68 pages; 16 figures. This revision contains a non-asymptotic version of earlier results, and results for general coefficient
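
    For intuition, the minimum $\ell_2$-norm interpolator is simply the pseudoinverse solution $\hat{\beta} = X^{+} y$, and its prediction risk can be simulated directly. The NumPy sketch below is a minimal illustration under an isotropic Gaussian design ($\Sigma = I$), not the paper's exact setting; the signal strength, noise level, and dimensions are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d_true, sigma = 100, 20, 0.5

    def min_norm_risk(p, n_test=2000):
        """Test risk of the minimum l2-norm ("ridgeless") interpolator in a
        toy Gaussian linear model with p features (isotropic design)."""
        beta = np.zeros(p)
        beta[:d_true] = 1.0 / np.sqrt(d_true)   # planted signal
        X = rng.standard_normal((n, p))
        y = X @ beta + sigma * rng.standard_normal(n)
        beta_hat = np.linalg.pinv(X) @ y        # min-norm least squares X^+ y
        X_test = rng.standard_normal((n_test, p))
        y_test = X_test @ beta + sigma * rng.standard_normal(n_test)
        return np.mean((X_test @ beta_hat - y_test) ** 2)

    # The risk typically spikes near the interpolation threshold p = n and can
    # decrease again for p >> n, the "double descent" shape discussed above.
    for p in [50, 90, 100, 110, 200, 800]:
        print(p, round(min_norm_risk(p), 3))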

    Genomic Analysis of Drosophila Neuronal Remodeling: A Role for the RNA-Binding Protein Boule as a Negative Regulator of Axon Pruning

    Drosophila mushroom body (MB) γ neurons undergo axon pruning during metamorphosis through a process of localized degeneration of specific axon branches. Developmental axon degeneration is initiated by the steroid hormone ecdysone, acting through a nuclear receptor complex composed of USP (ultraspiracle) and EcRB1 (ecdysone receptor B1) to regulate gene expression in MB γ neurons. To identify ecdysone-dependent gene expression changes in MB γ neurons at the onset of axon pruning, we use laser capture microdissection to isolate wild-type and mutant MB neurons in which EcR (ecdysone receptor) activity is genetically blocked, and analyze expression changes by microarray. We identify several molecular pathways that are regulated in MB neurons by ecdysone. The most striking observation is the upregulation of genes involved in the UPS (ubiquitin–proteasome system), which is cell-autonomously required for γ neuron pruning. In addition, we characterize the function of Boule, an evolutionarily conserved RNA-binding protein previously implicated in spermatogenesis in flies and vertebrates. boule expression is downregulated by ecdysone in MB neurons at the onset of pruning, and forced expression of Boule in MB γ neurons is sufficient to inhibit axon pruning. This activity is dependent on the RNA-binding domain of Boule and a conserved DAZ (deleted in azoospermia) domain implicated in interactions with other RNA-binding proteins. However, loss of Boule does not result in obvious defects in axon pruning or morphogenesis of MB neurons, suggesting that it acts redundantly with other ecdysone-regulated genes. We propose a novel function for Boule in the CNS as a negative regulator of developmental axon pruning.

    Deterministic Time-Space Tradeoffs for k-SUM

    Given a set of numbers, the $k$-SUM problem asks for a subset of $k$ numbers that sums to zero. When the numbers are integers, the time and space complexity of $k$-SUM is generally studied in the word-RAM model; when the numbers are reals, the complexity is studied in the real-RAM model, and space is measured by the number of reals held in memory at any point. We present a time- and space-efficient deterministic self-reduction for the $k$-SUM problem which holds for both models, and has many interesting consequences. To illustrate: * 3-SUM is in deterministic time $O(n^2 \lg\lg(n)/\lg(n))$ and space $O\left(\sqrt{\frac{n \lg(n)}{\lg\lg(n)}}\right)$. In general, any polylogarithmic-time improvement over quadratic time for 3-SUM can be converted into an algorithm with an identical time improvement but low space complexity as well. * 3-SUM is in deterministic time $O(n^2)$ and space $O(\sqrt{n})$, derandomizing an algorithm of Wang. * A popular conjecture states that 3-SUM requires $n^{2-o(1)}$ time on the word-RAM. We show that the 3-SUM Conjecture is in fact equivalent to the (seemingly weaker) conjecture that every $O(n^{.51})$-space algorithm for 3-SUM requires at least $n^{2-o(1)}$ time on the word-RAM. * For $k \ge 4$, $k$-SUM is in deterministic $O(n^{k-2+2/k})$ time and $O(\sqrt{n})$ space.
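
    For reference, the textbook deterministic 3-SUM algorithm (sort, then a two-pointer scan per pivot) already runs in $O(n^2)$ time and linear space; the bounds above improve on it in space and by logarithmic factors in time. The Python sketch below shows only this baseline, not the paper's self-reduction.

    def three_sum(nums):
        """Classical O(n^2)-time 3-SUM: sort the input, then for each pivot
        run a two-pointer scan for a pair summing to minus the pivot."""
        a = sorted(nums)
        n = len(a)
        for i in range(n - 2):
            lo, hi = i + 1, n - 1
            target = -a[i]
            while lo < hi:
                s = a[lo] + a[hi]
                if s == target:
                    return (a[i], a[lo], a[hi])
                if s < target:
                    lo += 1
                else:
                    hi -= 1
        return None  # no zero-sum triple exists

    print(three_sum([8, -25, 4, 10, 21, -3, 7]))  # -> (-25, 4, 21)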

    Digital zero noise extrapolation for quantum error mitigation

    Zero-noise extrapolation (ZNE) is an increasingly popular technique for mitigating errors in noisy quantum computations without using additional quantum resources. We review the fundamentals of ZNE and propose several improvements to noise scaling and extrapolation, the two key components of the technique. We introduce unitary folding and parameterized noise scaling. These are digital noise scaling frameworks, i.e., one can apply them using only gate-level access common to most quantum instruction sets. We also study different extrapolation methods, including a new adaptive protocol that uses a statistical inference framework. Benchmarks of our techniques show error reductions of 18X to 24X over non-mitigated circuits and demonstrate ZNE effectiveness at larger qubit numbers than have been tested previously. In addition to presenting new results, this work is a self-contained introduction to the practical use of ZNE by quantum programmers.
    Comment: 11 pages, 7 figure
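
    The two ingredients are easy to illustrate in isolation: unitary folding scales noise digitally by replacing a circuit U with U (U† U)^k, which leaves the ideal unitary unchanged, and extrapolation fits the measured expectation values against the noise scale factors and evaluates the fit at zero. The Python sketch below is a generic illustration under those assumptions (the gate-list representation and the "inverse" placeholder are hypothetical), not the authors' implementation.

    import numpy as np

    def fold_global(circuit, scale):
        """Global unitary folding sketch: map a gate list U to U (U^-1 U)^k,
        so the ideal circuit is unchanged while the noise is scaled by roughly
        2k + 1. The ("inverse", gate) tuples stand in for whatever gate
        inversion the caller's framework provides (an assumption here)."""
        k = int(round((scale - 1) / 2))
        folded = list(circuit)
        for _ in range(k):
            folded += [("inverse", g) for g in reversed(circuit)] + list(circuit)
        return folded

    def zne_extrapolate(scales, expvals, degree=1):
        """Fit <E>(scale) with a low-degree polynomial and evaluate the fit
        at scale = 0; degree=1 gives a linear (Richardson-style) estimate."""
        coeffs = np.polyfit(scales, expvals, degree)
        return np.polyval(coeffs, 0.0)

    # Toy usage with made-up noisy expectation values at scale factors 1, 3, 5:
    print(zne_extrapolate([1, 3, 5], [0.91, 0.74, 0.58]))  # zero-noise estimate ~0.99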

    COVID-19 is increasing the power of Brazil’s criminal groups

    Data from various states suggest that COVID-19 lockdowns have done little to reduce the use of violence by criminal groups in Brazil. What has changed is governance, with criminal actors adapting to coronavirus by imposing curfews, restricting movement, promoting public-health messages, and discouraging price gouging – alongside their usual practices of extortion and drug trafficking. Such changes in violence and governance indicate that Brazil's non-state armed groups continue to augment their power, and these gains may well persist once the pandemic has receded, write Ryan Berg (American Enterprise Institute) and Andrea Varsori (Urban Violence Research Network).