
### Heating the Solar Atmosphere by the Self-Enhanced Thermal Waves Caused by the Dynamo Processes

We discuss a possible mechanism for heating the solar atmosphere by the
ensemble of thermal waves, generated by the photospheric dynamo and propagating
upwards with increasing magnitudes. These waves are self-sustained and
amplified due to the specific dependence of the efficiency of heat release by
Ohmic dissipation on the ratio of the collisional frequency to the gyrofrequency, which
in turn is determined by the temperature profile formed in the wave. In the
case of sufficiently strong driving, such a mechanism can increase the plasma
temperature by a few times, i.e. it may be responsible for heating the
chromosphere and the base of the transition region.
Comment: v2: A number of minor corrections and additional explanations. AASTeX, 5 pages, 2 EPS figures, submitted to The Astrophysical Journal.

### Model of Multi-branch Trees for Efficient Resource Allocation

Although exploring the principles of resource allocation is still important in many fields, little is known about appropriate methods for optimal resource allocation thus far. This is because we must consider many issues, including opposing interests between many types of stakeholders. Here, we develop a new allocation method to resolve budget conflicts. To do so, we consider two points: minimizing assessment costs and satisfying allocational efficiency. In our method, an evaluator's assessment is restricted to the projects in his or her own department, and both the executive's and mid-level executives' assessments are likewise restricted to the representative project of each branch or department they manage. At the same time, we develop a calculation method to integrate such assessments by using a multi-branch tree structure, where the set of leaf nodes represents projects and the set of non-leaf nodes represents either directors or executives. Our method is incentive-compatible because no director has any incentive to make fallacious assessments.
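The tree-based integration described above can be sketched in code. The sketch below is a minimal illustration, not the paper's actual method: the `Node` structure, the choice of a branch's highest-scored project as its representative, and the rule that a director's cross-assessment of the representative rescales every project in that branch are all assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    score: float = 0.0            # local score from the department's own evaluator
    children: list = field(default_factory=list)

def leaves(node):
    if not node.children:
        yield node
    else:
        for child in node.children:
            yield from leaves(child)

def representative(branch):
    # Representative project of a branch: its highest locally scored project
    # (an assumption; the paper does not fix this choice here).
    return max(leaves(branch), key=lambda n: n.score)

def global_scores(node, director_ratings, factor=1.0, out=None):
    """Integrate local assessments over the multi-branch tree.

    director_ratings maps a branch name to the score its managing director
    gave that branch's representative project; the ratio of this single
    cross-assessment to the representative's local score rescales every
    project in the branch (hypothetical integration rule).
    """
    out = {} if out is None else out
    if not node.children:
        out[node.name] = factor * node.score
        return out
    for child in node.children:
        rep = representative(child)
        rescale = director_ratings.get(child.name, rep.score) / rep.score
        global_scores(child, director_ratings, factor * rescale, out)
    return out
```

Note that each assessor only ever scores items inside their own scope, which is the cost-minimizing property the abstract emphasizes: evaluators score their own department's projects, and each director scores one representative per branch.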

### Algorithms and Bounds for Very Strong Rainbow Coloring

A well-studied coloring problem is to assign colors to the edges of a graph $G$ so that, for every pair of vertices, all edges of at least one shortest path between them receive different colors. The minimum number of colors necessary in such a coloring is the strong rainbow connection number $\mathrm{src}(G)$ of the graph. When proving upper bounds on $\mathrm{src}(G)$, it is natural to prove that a coloring exists where, for *every* shortest path between every pair of vertices in the graph, all edges of the path receive different colors. Therefore, we introduce and formally define this more restricted edge coloring number, which we call the *very strong rainbow connection number* $\mathrm{vsrc}(G)$.
In this paper, we give upper bounds on $\mathrm{vsrc}(G)$ for several graph classes, some of which are tight. These immediately imply new upper bounds on $\mathrm{src}(G)$ for these classes, showing that the study of $\mathrm{vsrc}(G)$ enables meaningful progress on bounding $\mathrm{src}(G)$. Then we study the complexity of computing $\mathrm{vsrc}(G)$, particularly for graphs of bounded treewidth, and show this is an interesting problem in its own right. We prove that $\mathrm{vsrc}(G)$ can be computed in polynomial time on cactus graphs; in contrast, this question is still open for $\mathrm{src}(G)$. We also observe that deciding whether $\mathrm{vsrc}(G) = k$ is fixed-parameter tractable in $k$ and the treewidth of $G$. Finally, on general graphs, we prove that there is no polynomial-time algorithm to decide whether $\mathrm{vsrc}(G) \leq 3$, nor to approximate $\mathrm{vsrc}(G)$ within a factor of $n^{1-\varepsilon}$, unless P $=$ NP.
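The defining condition of $\mathrm{vsrc}(G)$ is directly checkable for small graphs: every shortest path between every vertex pair must be rainbow. A brute-force verifier (not from the paper; a sketch to make the definition concrete) enumerates all shortest paths via BFS layering:

```python
from collections import deque
from itertools import combinations

def all_shortest_paths(adj, s, t):
    """Enumerate every shortest s-t path in an unweighted graph via BFS layers."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    paths = []
    def backtrack(v, suffix):
        if v == s:
            paths.append([s] + suffix)
            return
        for u in adj[v]:
            if dist.get(u) == dist[v] - 1:   # u lies on some shortest path to v
                backtrack(u, [v] + suffix)
    if t in dist:
        backtrack(t, [])
    return paths

def is_very_strong_rainbow(adj, colour):
    """vsrc condition: on *every* shortest path between *every* vertex pair,
    all edge colours are pairwise distinct.
    colour maps frozenset({u, v}) -> colour of edge uv."""
    for s, t in combinations(adj, 2):
        for path in all_shortest_paths(adj, s, t):
            cols = [colour[frozenset(e)] for e in zip(path, path[1:])]
            if len(set(cols)) < len(cols):
                return False
    return True
```

For example, on the 4-cycle, alternating two colours satisfies the condition, since both shortest paths between opposite vertices use two differently coloured edges. The exponential number of shortest paths in general graphs is one reason computing $\mathrm{vsrc}(G)$ is nontrivial.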

### Improved Distributed Algorithms for Exact Shortest Paths

Computing shortest paths is one of the central problems in the theory of
distributed computing. For the last few years, substantial progress has been
made on the approximate single source shortest paths problem, culminating in an
algorithm of Becker et al. [DISC'17] which deterministically computes
$(1+o(1))$-approximate shortest paths in $\tilde O(D+\sqrt n)$ time, where $D$
is the hop-diameter of the graph. Up to logarithmic factors, this time
complexity is optimal, matching the lower bound of Elkin [STOC'04].
The question of exact shortest paths however saw no algorithmic progress for
decades, until the recent breakthrough of Elkin [STOC'17], which established a
sublinear-time algorithm for exact single source shortest paths on undirected
graphs. Shortly after, Huang et al. [FOCS'17] provided improved algorithms for
exact all pairs shortest paths problem on directed graphs.
In this paper, we present a new single-source shortest path algorithm with
complexity $\tilde O(n^{3/4}D^{1/4})$. For polylogarithmic $D$, this improves
on Elkin's $\tilde{O}(n^{5/6})$ bound and gets closer to the
$\tilde{\Omega}(n^{1/2})$ lower bound of Elkin [STOC'04]. For larger values of
$D$, we present an improved variant of our algorithm which achieves complexity
$\tilde{O}\left( n^{3/4+o(1)}+ \min\{ n^{3/4}D^{1/6},n^{6/7}\}+D\right)$, and
thus compares favorably with Elkin's bound of $\tilde{O}(n^{5/6} +
n^{2/3}D^{1/3} + D )$ in essentially the entire range of parameters. This
algorithm provides also a qualitative improvement, because it works for the
more challenging case of directed graphs (i.e., graphs where the two directions
of an edge can have different weights), constituting the first sublinear-time
algorithm for directed graphs. Our algorithm also extends to the case of exact
$\kappa$-source shortest paths...
Comment: 26 pages.
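For context, the classical baseline these sublinear-round algorithms improve on is distributed Bellman-Ford, which takes $O(n)$ rounds: in each round every node announces its tentative distance and its neighbours relax. The simulation below is a sketch of that baseline only, not of the paper's algorithm; note it handles directed edges with asymmetric weights naturally.

```python
import math

def sssp_rounds(n, edges, source):
    """Round-synchronous Bellman-Ford simulation: the O(n)-round distributed
    baseline for exact single-source shortest paths. Each round, relaxations
    use only the previous round's values (one message hop per round); after
    at most n - 1 rounds all exact distances are final."""
    dist = [math.inf] * n
    dist[source] = 0.0
    for _ in range(n - 1):
        nxt = dist[:]                      # this round's messages carry old values
        for u, v, w in edges:              # directed edge u -> v with weight w
            if dist[u] + w < nxt[v]:
                nxt[v] = dist[u] + w
        if nxt == dist:                    # no update anywhere: converged early
            break
        dist = nxt
    return dist
```

The gap between this $O(n)$ baseline and the $\tilde{\Omega}(\sqrt n)$ lower bound is exactly the range the $\tilde O(n^{3/4}D^{1/4})$ result narrows.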

### Designing Algorithms for Optimization of Parameters of Functioning of Intelligent System for Radionuclide Myocardial Diagnostics

The influence of the number of complex components of the fast Fourier transform used in analyzing the polar maps of radionuclide examination of the myocardium at rest and under stress on the functional efficiency of the system for diagnosing myocardial pathologies was explored, and their optimal values in the information sense were determined, which increases the efficiency of the algorithms forming the diagnostic decision rules by reducing the capacity of the dictionary of recognition features. Information-extreme sequential cluster algorithms for selecting the dictionary of features, containing both quantitative and categorical features, were developed and the results of their work were compared. Modifications of the dictionary-selection algorithms were suggested that both increase the speed of the search for the informationally optimal dictionary and reduce its capacity by 40%. We obtained decision rules that are faultless on the training matrix and whose accuracy in the exam mode asymptotically approaches this limit. It was experimentally confirmed that the proposed training algorithm for the diagnostic system reduces the minimum representative volume of the training matrix from 300 to 81 vector implementations of the recognition classes of the functional myocardial state.
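The general shape of sequential feature-dictionary selection can be sketched as a greedy forward search. This is an illustrative stand-in only: the paper's information-extreme cluster criterion is not reproduced here, so `info` below is an assumed placeholder for any set-to-float information measure being maximized.

```python
def forward_select(features, info, max_size=None):
    """Greedy sequential selection of a feature dictionary (sketch).

    `info` is a user-supplied function mapping a list of features to an
    information score; selection stops when no remaining feature improves
    it, which is what keeps the resulting dictionary compact.
    """
    selected, best = [], info([])
    remaining = list(features)
    while remaining and (max_size is None or len(selected) < max_size):
        gain, pick = max(((info(selected + [f]), f) for f in remaining),
                         key=lambda t: t[0])
        if gain <= best:       # no feature improves the criterion: stop
            break
        selected.append(pick)
        remaining.remove(pick)
        best = gain
    return selected
```

Stopping at the first non-improving step is one way such a search reduces dictionary capacity while keeping the informationally useful features.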


### Algorithms, Automation, and News

This special issue examines the growing importance of algorithms and automation in the gathering, composition, and distribution of news. It connects a long line of research on journalism and computation with scholarly and professional terrain yet to be explored. Taken as a whole, these articles share some of the noble ambitions of the pioneering publications on "reporting algorithms", such as a desire to see computing help journalists in their watchdog role by holding power to account. However, they also go further, firstly by addressing the fuller range of technologies that computational journalism now consists of: from chatbots and recommender systems, to artificial intelligence and atomised journalism. Secondly, they advance the literature by demonstrating the increased variety of uses for these technologies, including engaging underserved audiences, selling subscriptions, and recombining and re-using content. Thirdly, they problematize computational journalism by, for example, pointing out some of the challenges inherent in applying AI to investigative journalism and in trying to preserve public service values. Fourthly, they offer suggestions for future research and practice, including by presenting a framework for developing democratic news recommenders and another that may help us think about computational journalism in a more integrated, structured manner.

### On the use of biased-randomized algorithms for solving non-smooth optimization problems

Soft constraints are quite common in real-life applications. For example, in freight transportation, the fleet size can be enlarged by outsourcing part of the distribution service, and some deliveries to customers can be postponed as well; in inventory management, it is possible to consider stock-outs generated by unexpected demands; and in manufacturing processes and project management, it is frequent that some deadlines cannot be met due to delays in critical steps of the supply chain. However, capacity-, size-, and time-related limitations are included in many optimization problems as hard constraints, while it would usually be more realistic to consider them as soft ones, i.e., constraints that can be violated to some extent by incurring a penalty cost. Most of the time, this penalty cost will be nonlinear and even noncontinuous, which might transform the objective function into a non-smooth one. Despite their many practical applications, non-smooth optimization problems are quite challenging, especially when the underlying optimization problem is NP-hard in nature. In this paper, we propose the use of biased-randomized algorithms as an effective methodology to cope with NP-hard and non-smooth optimization problems in many practical applications. Biased-randomized algorithms extend constructive heuristics by introducing a nonuniform randomization pattern into them. Hence, they can be used to explore promising areas of the solution space without the limitations of gradient-based approaches, which assume the existence of smooth objective functions. Moreover, biased-randomized algorithms can be easily parallelized, thus employing short computing times while exploring a large number of promising regions. This paper discusses these concepts in detail, reviews existing work in different application areas, and highlights current trends and open research lines.
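The core idea of a biased-randomized constructive heuristic can be sketched in a few lines. The sketch below assumes one common instantiation (a geometric distribution over the greedy-sorted candidate list, with skew parameter `beta`); the surveyed literature covers other skewed distributions as well.

```python
import math
import random

def biased_randomized_order(candidates, cost, beta=0.3, rng=None):
    """Biased-randomized constructive step (sketch).

    Candidates are kept sorted by the greedy criterion, and at each step the
    next element's position is drawn from a geometric(beta) distribution, so
    index 0 (the pure greedy choice) is most likely but lower-ranked options
    are occasionally taken. The construction stays near-greedy while
    diversifying across runs -- no smoothness of the objective is required.
    """
    rng = rng or random.Random()
    remaining = sorted(candidates, key=cost)
    solution = []
    while remaining:
        u = max(rng.random(), 1e-12)                    # guard against log(0)
        idx = int(math.log(u) / math.log(1.0 - beta))   # geometric skew to index 0
        solution.append(remaining.pop(min(idx, len(remaining) - 1)))
    return solution
```

As `beta` approaches 1 the procedure reduces to the deterministic greedy heuristic; smaller values explore more aggressively. Because each run is independent, many biased-randomized constructions can be launched in parallel and the best solution kept, which is the parallelization property the abstract highlights.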
