3,062,943 research outputs found

    Heating the Solar Atmosphere by the Self-Enhanced Thermal Waves Caused by the Dynamo Processes

    Full text link
    We discuss a possible mechanism for heating the solar atmosphere by an ensemble of thermal waves that are generated by the photospheric dynamo and propagate upwards with increasing magnitude. These waves are self-sustained and amplified due to the specific dependence of the efficiency of heat release by Ohmic dissipation on the ratio of the collisional to gyro frequencies, which in turn is determined by the temperature profile formed in the wave. In the case of sufficiently strong driving, such a mechanism can increase the plasma temperature by a few times, i.e. it may be responsible for heating the chromosphere and the base of the transition region. Comment: v2: A number of minor corrections and additional explanations. AASTeX, 5 pages, 2 EPS figures, submitted to The Astrophysical Journal.
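    As a rough illustration of the feedback described above (a sketch only, with an assumed single-fluid closure, not the paper's equations), the Ohmic heating of a transverse current through the Pedersen conductivity depends on the ratio of the collision frequency to the gyrofrequency, and the collision frequency itself depends on the local temperature, closing the loop that can amplify an upward-propagating thermal wave:

```latex
% Illustrative sketch only; notation and closure are assumptions, not the paper's model.
% The Pedersen conductivity peaks where the collision frequency \nu equals the
% gyrofrequency \omega_g, so the heating efficiency depends on \nu/\omega_g.
\begin{align}
  Q \;=\; \frac{j_\perp^{2}}{\sigma_P}, \qquad
  \sigma_P \;\propto\; \frac{\nu(T)}{\nu(T)^{2} + \omega_g^{2}}, \qquad
  \nu \;=\; \nu(T),
\end{align}
% so a temperature perturbation changes \nu/\omega_g, hence \sigma_P, hence the
% heat release Q, which in turn feeds back on the temperature profile of the wave.
```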

    Improved Distributed Algorithms for Exact Shortest Paths

    Full text link
    Computing shortest paths is one of the central problems in the theory of distributed computing. For the last few years, substantial progress has been made on the approximate single source shortest paths problem, culminating in an algorithm of Becker et al. [DISC'17] which deterministically computes $(1+o(1))$-approximate shortest paths in $\tilde O(D+\sqrt{n})$ time, where $D$ is the hop-diameter of the graph. Up to logarithmic factors, this time complexity is optimal, matching the lower bound of Elkin [STOC'04]. The question of exact shortest paths, however, saw no algorithmic progress for decades, until the recent breakthrough of Elkin [STOC'17], which established a sublinear-time algorithm for exact single source shortest paths on undirected graphs. Shortly after, Huang et al. [FOCS'17] provided improved algorithms for the exact all pairs shortest paths problem on directed graphs. In this paper, we present a new single-source shortest path algorithm with complexity $\tilde O(n^{3/4}D^{1/4})$. For polylogarithmic $D$, this improves on Elkin's $\tilde{O}(n^{5/6})$ bound and gets closer to the $\tilde{\Omega}(n^{1/2})$ lower bound of Elkin [STOC'04]. For larger values of $D$, we present an improved variant of our algorithm which achieves complexity $\tilde{O}\left(n^{3/4+o(1)} + \min\{n^{3/4}D^{1/6}, n^{6/7}\} + D\right)$, and thus compares favorably with Elkin's bound of $\tilde{O}(n^{5/6} + n^{2/3}D^{1/3} + D)$ in essentially the entire range of parameters. This algorithm also provides a qualitative improvement, because it works for the more challenging case of directed graphs (i.e., graphs where the two directions of an edge can have different weights), constituting the first sublinear-time algorithm for directed graphs. Our algorithm also extends to the case of exact $\kappa$-source shortest paths... Comment: 26 pages.
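    For context only, the classical exact baseline in this distributed (CONGEST-style) setting is round-by-round Bellman-Ford, which needs up to n-1 rounds; the sketch below simulates it and is not the sublinear-time algorithm of the paper. The graph encoding as (u, v, w) triples is an assumption made for the example.

```python
# Illustrative baseline only: a round-by-round simulation of distributed
# Bellman-Ford, the O(n)-round exact SSSP algorithm that the sublinear-time
# results above improve upon.

import math

def distributed_bellman_ford(n, edges, source):
    """Simulate synchronous rounds: in each round every node sends its
    current distance estimate to its out-neighbours."""
    dist = [math.inf] * n
    dist[source] = 0
    for _ in range(n - 1):              # at most n-1 rounds are needed
        updated = False
        new_dist = dist[:]
        for u, v, w in edges:           # each edge carries one message per round
            if dist[u] + w < new_dist[v]:
                new_dist[v] = dist[u] + w
                updated = True
        dist = new_dist
        if not updated:                 # early stop once estimates stabilise
            break
    return dist

# Example: a small directed graph on 4 nodes.
edges = [(0, 1, 2), (1, 2, 1), (0, 2, 5), (2, 3, 3)]
print(distributed_bellman_ford(4, edges, source=0))   # [0, 2, 3, 6]
```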

    Model of Multi-branch Trees for Efficient Resource Allocation

    Get PDF
    Although exploring the principles of resource allocation remains important in many fields, little is known so far about appropriate methods for optimal resource allocation. This is because many issues must be considered, including opposing interests among many types of stakeholders. Here, we develop a new allocation method to resolve budget conflicts. To do so, we consider two points: minimizing assessment costs and satisfying allocational efficiency. In our method, an evaluator's assessment is restricted to the projects in his or her own department, and the assessments of the executive and the mid-level executives are likewise restricted to one representative project in each branch or department they manage. At the same time, we develop a calculation method that integrates these assessments by using a multi-branch tree structure, where the leaf nodes represent projects and the non-leaf nodes represent either directors or executives. Our method is incentive-compatible because no director has any incentive to make fallacious assessments.
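    The sketch below illustrates one way such a two-level assessment tree could be integrated: department evaluators score only their own projects, the executive scores one representative project per department, and each branch is rescaled so that its representative matches the executive's assessment. The rescaling rule and the proportional budget split are assumptions made for illustration, not the calculation method defined by the authors.

```python
# A minimal sketch (assumptions noted above): leaves are projects scored by their
# own department's evaluator; the executive scores only the representative
# (here: first) project of each department; branches are rescaled accordingly.

from dataclasses import dataclass

@dataclass
class Project:
    name: str
    dept_score: float          # assessment by the project's own department evaluator

@dataclass
class Department:
    name: str
    projects: list             # list of Project; projects[0] is the representative
    exec_score_of_rep: float   # executive's assessment of the representative project

def allocate(budget, departments):
    """Rescale each department's internal scores so that its representative project
    matches the executive's assessment, then split the budget proportionally."""
    weights = {}
    for dept in departments:
        rep = dept.projects[0]
        scale = dept.exec_score_of_rep / rep.dept_score
        for p in dept.projects:
            weights[p.name] = p.dept_score * scale
    total = sum(weights.values())
    return {name: budget * w / total for name, w in weights.items()}

# Example: two departments, four projects, a budget of 100.
departments = [
    Department("R&D", [Project("r1", 8.0), Project("r2", 4.0)], exec_score_of_rep=6.0),
    Department("Ops", [Project("o1", 5.0), Project("o2", 5.0)], exec_score_of_rep=5.0),
]
print(allocate(100.0, departments))
```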

    Algorithms and Bounds for Very Strong Rainbow Coloring

    Full text link
    A well-studied coloring problem is to assign colors to the edges of a graph $G$ so that, for every pair of vertices, all edges of at least one shortest path between them receive different colors. The minimum number of colors necessary in such a coloring is the strong rainbow connection number ($\mathrm{src}(G)$) of the graph. When proving upper bounds on $\mathrm{src}(G)$, it is natural to prove that a coloring exists where, for every shortest path between every pair of vertices in the graph, all edges of the path receive different colors. Therefore, we introduce and formally define this more restricted edge coloring number, which we call the very strong rainbow connection number ($\mathrm{vsrc}(G)$). In this paper, we give upper bounds on $\mathrm{vsrc}(G)$ for several graph classes, some of which are tight. These immediately imply new upper bounds on $\mathrm{src}(G)$ for these classes, showing that the study of $\mathrm{vsrc}(G)$ enables meaningful progress on bounding $\mathrm{src}(G)$. Then we study the complexity of the problem to compute $\mathrm{vsrc}(G)$, particularly for graphs of bounded treewidth, and show this is an interesting problem in its own right. We prove that $\mathrm{vsrc}(G)$ can be computed in polynomial time on cactus graphs; in contrast, this question is still open for $\mathrm{src}(G)$. We also observe that deciding whether $\mathrm{vsrc}(G) = k$ is fixed-parameter tractable in $k$ and the treewidth of $G$. Finally, on general graphs, we prove that there is no polynomial-time algorithm to decide whether $\mathrm{vsrc}(G) \leq 3$ nor to approximate $\mathrm{vsrc}(G)$ within a factor $n^{1-\varepsilon}$, unless P = NP.
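    To make the two definitions concrete, the brute-force checker below verifies the very strong condition: every shortest path between every pair of vertices must use pairwise distinct edge colors (for src(G), one rainbow shortest path per pair would suffice). It is purely illustrative and only practical for tiny graphs, since a graph can have exponentially many shortest paths.

```python
# Illustrative brute-force check of the "very strong" condition defined above.

from itertools import combinations
from collections import deque

def all_shortest_paths(adj, s, t):
    """Enumerate all shortest s-t paths in an unweighted graph via BFS layers."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    if t not in dist:
        return
    def extend(path):
        u = path[-1]
        if u == t:
            yield path
            return
        for v in adj[u]:
            # Only follow edges that advance one BFS layer towards t's layer.
            if dist.get(v) == dist[u] + 1 and dist[v] <= dist[t]:
                yield from extend(path + [v])
    yield from extend([s])

def is_very_strong_rainbow(adj, color):
    """color maps frozenset({u, v}) -> a color id for the edge {u, v}."""
    for s, t in combinations(adj, 2):
        for path in all_shortest_paths(adj, s, t):
            cols = [color[frozenset(e)] for e in zip(path, path[1:])]
            if len(cols) != len(set(cols)):    # some shortest path repeats a color
                return False
    return True

# Example: a 4-cycle a-b-c-d-a with two colors is a very strong rainbow coloring.
adj = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["a", "c"]}
color = {frozenset("ab"): 1, frozenset("bc"): 2, frozenset("cd"): 1, frozenset("ad"): 2}
print(is_very_strong_rainbow(adj, color))      # True
```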

    Algorithms, Automation, and News

    Get PDF
    This special issue examines the growing importance of algorithms and automation in the gathering, composition, and distribution of news. It connects a long line of research on journalism and computation with scholarly and professional terrain yet to be explored. Taken as a whole, these articles share some of the noble ambitions of the pioneering publications on ‘reporting algorithms’, such as a desire to see computing help journalists in their watchdog role by holding power to account. However, they also go further, firstly by addressing the fuller range of technologies that computational journalism now consists of: from chatbots and recommender systems, to artificial intelligence and atomised journalism. Secondly, they advance the literature by demonstrating the increased variety of uses for these technologies, including engaging underserved audiences, selling subscriptions, and recombining and re-using content. Thirdly, they problematize computational journalism by, for example, pointing out some of the challenges inherent in applying AI to investigative journalism and in trying to preserve public service values. Fourthly, they offer suggestions for future research and practice, including by presenting a framework for developing democratic news recommenders and another that may help us think about computational journalism in a more integrated, structured manner

    Random local algorithms

    Full text link
    Consider the problem of constructing some structure on a bounded-degree graph, e.g. an almost maximum matching, where the decision about each edge may depend only on its constant-radius neighbourhood. We show that information about the local statistics of the graph does not help here. Namely, if there exists a random local algorithm which can use any local statistics about the graph and produces an almost optimal structure, then the same can be achieved by a random local algorithm using no statistics. Comment: 9 pages.
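    As an illustration of the kind of algorithm discussed (our own example, not one from the paper), the rule below decides each edge of a matching from its radius-1 neighbourhood plus independent randomness and uses no statistics about the graph: an edge enters the matching iff its random label is a strict local minimum among the edges sharing an endpoint with it.

```python
# Illustrative random local rule: each edge draws an independent random label
# and joins the matching iff its label beats every adjacent edge's label.
# Two adjacent edges cannot both be strict local minima, so the output is a matching.

import random

def local_matching(edges, seed=None):
    rng = random.Random(seed)
    label = {e: rng.random() for e in edges}
    # Edges incident to each vertex (the radius-1 neighbourhood of an edge).
    incident = {}
    for u, v in edges:
        incident.setdefault(u, []).append((u, v))
        incident.setdefault(v, []).append((u, v))
    matching = []
    for u, v in edges:
        neighbours = [f for f in incident[u] + incident[v] if f != (u, v)]
        if all(label[(u, v)] < label[f] for f in neighbours):
            matching.append((u, v))
    return matching

# Example: a path on 5 vertices; the output is a (possibly non-maximal) matching.
print(local_matching([(0, 1), (1, 2), (2, 3), (3, 4)], seed=1))
```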

    Fast Genetic Algorithms

    Full text link
    For genetic algorithms using a bit-string representation of length $n$, the general recommendation is to take $1/n$ as mutation rate. In this work, we discuss whether this is really justified for multimodal functions. Taking jump functions and the $(1+1)$ evolutionary algorithm as the simplest example, we observe that larger mutation rates give significantly better runtimes. For the $\mathrm{jump}_{m,n}$ function, any mutation rate between $2/n$ and $m/n$ leads to a speed-up at least exponential in $m$ compared to the standard choice. The asymptotically best runtime, obtained from using the mutation rate $m/n$ and leading to a speed-up super-exponential in $m$, is very sensitive to small changes of the mutation rate. Any deviation by a small $(1 \pm \varepsilon)$ factor leads to a slow-down exponential in $m$. Consequently, any fixed mutation rate gives strongly sub-optimal results for most jump functions. Building on this observation, we propose to use a random mutation rate $\alpha/n$, where $\alpha$ is chosen from a power-law distribution. We prove that the $(1+1)$ EA with this heavy-tailed mutation rate optimizes any $\mathrm{jump}_{m,n}$ function in a time that is only a small polynomial (in $m$) factor above the one stemming from the optimal rate for this $m$. Our heavy-tailed mutation operator yields similar speed-ups (over the best known performance guarantees) for the vertex cover problem in bipartite graphs and the matching problem in general graphs. Following the example of fast simulated annealing, fast evolution strategies, and fast evolutionary programming, we propose to call genetic algorithms using a heavy-tailed mutation operator fast genetic algorithms.
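    A sketch of the proposed operator, following the description in the abstract: the (1+1) EA, but with the mutation rate alpha/n redrawn in every iteration, where alpha follows a power law. The exponent beta = 1.5 and the support {1, ..., n/2} are assumptions made for this illustration rather than values stated above.

```python
# (1+1) EA on bit strings with a heavy-tailed (power-law) mutation rate alpha/n.
# beta and the support of alpha are illustrative assumptions.

import random

def power_law_alpha(n, beta=1.5, rng=random):
    """Sample alpha in {1, ..., n//2} with P(alpha = i) proportional to i**(-beta)."""
    support = list(range(1, n // 2 + 1))
    weights = [i ** (-beta) for i in support]
    return rng.choices(support, weights=weights, k=1)[0]

def heavy_tailed_one_plus_one_ea(fitness, n, iterations=10_000, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = fitness(x)
    for _ in range(iterations):
        rate = power_law_alpha(n, rng=rng) / n            # mutation rate alpha/n
        y = [bit ^ (rng.random() < rate) for bit in x]    # flip each bit independently
        fy = fitness(y)
        if fy >= fx:                                      # elitist (1+1) selection
            x, fx = y, fy
    return x, fx

# Example on OneMax (number of ones); a jump function would plug in the same way.
best, value = heavy_tailed_one_plus_one_ea(fitness=sum, n=50, iterations=5_000)
print(value)
```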

    Algorithms and Speech

    Get PDF
    One of the central questions in free speech jurisprudence is what activities the First Amendment encompasses. This Article considers that question in the context of an area of increasing importance – algorithm-based decisions. I begin by looking to broadly accepted legal sources, which for the First Amendment means primarily Supreme Court jurisprudence. That jurisprudence provides for very broad First Amendment coverage, and the Court has reinforced that breadth in recent cases. Under the Court’s jurisprudence the First Amendment (and the heightened scrutiny it entails) would apply to many algorithm-based decisions, specifically those entailing substantive communications. We could of course adopt a limiting conception of the First Amendment, but any nonarbitrary exclusion of algorithm-based decisions would require major changes in the Court’s jurisprudence. I believe that First Amendment coverage of algorithm-based decisions is too small a step to justify such changes. But insofar as we are concerned about the expansiveness of First Amendment coverage, we may want to limit it in two areas of genuine uncertainty: editorial decisions that are neither obvious nor communicated to the reader, and laws that single out speakers but do not regulate their speech. Even with those limitations, however, an enormous and growing amount of activity will be subject to heightened scrutiny absent a fundamental reorientation of First Amendment jurisprudence