Trip-Based Public Transit Routing Using Condensed Search Trees
We study the problem of planning Pareto-optimal journeys in public transit
networks. Most existing algorithms and speed-up techniques work by computing
subjourneys to intermediary stops until the destination is reached. In
contrast, the trip-based model focuses on trips and transfers between them,
constructing journeys as a sequence of trips. In this paper, we develop a
speed-up technique for this model inspired by principles behind existing
state-of-the-art speed-up techniques, Transfer Pattern and Hub Labelling. The
resulting algorithm allows us to compute Pareto-optimal (with respect to
arrival time and number of transfers) 24-hour profiles on very large real-world
networks in less than half a millisecond. Compared to the current state of the
art for bicriteria queries on public transit networks, this is up to two orders
of magnitude faster, while increasing preprocessing overhead by at most one
order of magnitude.
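The bicriteria objective above can be made concrete with a small sketch: a journey label is a pair (arrival time, number of transfers), and a query answer is the Pareto front of such labels. This helper is illustrative only, not the paper's trip-based algorithm or its condensed search trees:

```python
def pareto_front(labels):
    """Keep only labels (arrival_time, transfers) that are not dominated.

    A label dominates another if it is no worse in both criteria and
    strictly better in at least one.
    """
    front = []
    for a in sorted(labels):  # sort by arrival time, then transfers
        # after sorting, a label survives iff its transfer count is
        # strictly lower than that of every label already on the front
        if all(a[1] < b[1] for b in front):
            front.append(a)
    return front
```

For example, `pareto_front([(12, 1), (10, 2), (11, 3), (10, 4)])` keeps only `(10, 2)` and `(12, 1)`: every other label arrives no earlier while using at least as many transfers.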
When the path is never shortest: a reality check on shortest path biocomputation
Shortest path problems are a touchstone for evaluating the computing
performance and functional range of novel computing substrates. Much has been
published in recent years regarding the use of biocomputers to solve minimal
path problems such as route optimisation and labyrinth navigation, but their
outputs are typically difficult to reproduce and somewhat abstract in nature,
suggesting that both experimental design and analysis in the field require
standardising. This chapter details laboratory experimental data which probe
the path finding process in two single-celled protistic model organisms,
Physarum polycephalum and Paramecium caudatum, comprising a shortest path
problem and labyrinth navigation, respectively. The results presented
illustrate several of the key difficulties that are encountered in categorising
biological behaviours in the language of computing, including biological
variability, non-halting operations and adverse reactions to experimental
stimuli. It is concluded that neither organism examined is able to efficiently
or reproducibly solve shortest path problems in the specific experimental
conditions that were tested. Data presented are contextualised with biological
theory and design principles for maximising the usefulness of experimental
biocomputer prototypes.
Comment: To appear in: Adamatzky, A. (Ed.) Shortest path solvers. From software
to wetware. Springer, 201
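For context, the shortest path and labyrinth tasks posed to these organisms have a deterministic, halting solution on conventional hardware; a breadth-first search over a grid maze is a minimal baseline sketch against which biocomputer outputs could be compared (the maze encoding and function are illustrative assumptions, not taken from the chapter):

```python
from collections import deque

def maze_shortest_path(maze, start, goal):
    """Breadth-first search on a grid labyrinth: '#' walls, '.' open.

    Returns the length of the shortest path (number of moves), or None
    if the goal is unreachable; deterministic and guaranteed to halt.
    """
    rows, cols = len(maze), len(maze[0])
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == '.' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None
```

Unlike the protists in the chapter, this baseline always terminates and always reproduces the same answer for the same maze, which is precisely the behaviour the authors found difficult to elicit biologically.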
The Encyclopedia of Neutrosophic Researchers - vol. 3
This is the third volume of the Encyclopedia of Neutrosophic Researchers, edited from materials offered by the authors who responded to the editor's invitation. The authors are listed alphabetically. The introduction contains a short history of neutrosophics, together with links to the main papers and books. Neutrosophic set, neutrosophic logic, neutrosophic probability, neutrosophic statistics, neutrosophic measure, neutrosophic precalculus, neutrosophic calculus and so on are gaining significant attention in solving many real-life problems that involve uncertainty, impreciseness, vagueness, incompleteness, inconsistency and indeterminacy. In recent years the fields of neutrosophics have been extended and applied in various areas, such as artificial intelligence, data mining, soft computing, decision making in incomplete / indeterminate / inconsistent information systems, image processing, computational modelling, robotics, medical diagnosis, biomedical engineering, investment problems, economic forecasting, social science, and humanistic and practical achievements.
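As a small illustration of the neutrosophic idea, a single-valued neutrosophic membership assigns each element an independent triple (T, I, F) of truth, indeterminacy, and falsity degrees, each in [0, 1] (so the three need not sum to 1). The complement below follows one common convention from the literature and is a sketch, not code from the volume:

```python
def neutrosophic_complement(tif):
    """Complement of a single-valued neutrosophic membership (T, I, F).

    Each component lies independently in [0, 1], so T + I + F may be
    anywhere in [0, 3]; one common convention swaps truth and falsity
    and reflects indeterminacy: (T, I, F) -> (F, 1 - I, T).
    """
    t, i, f = tif
    assert all(0.0 <= x <= 1.0 for x in (t, i, f))
    return (f, 1.0 - i, t)
```

The independence of the three degrees is what distinguishes neutrosophic sets from fuzzy and intuitionistic fuzzy sets, where the components are constrained to sum to at most 1.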
Towards an Autonomous Walking Robot for Planetary Surfaces
In this paper, recent progress in the development of
the DLR Crawler - a six-legged, actively compliant walking
robot prototype - is presented. The robot implements
a walking layer with a simple tripod and a more complex
biologically inspired gait. Using a variety of proprioceptive
sensors, different reflexes for reactively crossing obstacles
within the walking height are realised. On top of
the walking layer, a navigation layer provides the ability
to autonomously navigate to a predefined goal point in
unknown rough terrain using a stereo camera. A model
of the environment is created, the terrain traversability is
estimated and an optimal path is planned. The difficulty
of the path can be influenced by behavioral parameters.
Motion commands are sent to the walking layer and the
gait pattern is switched according to the estimated terrain
difficulty. The interaction between the walking and navigation
layers was tested in different experimental setups.
The topology of large Open Connectome networks for the human brain
The structural human connectome (i.e.\ the network of fiber connections in
the brain) can be analyzed at ever finer spatial resolution thanks to advances
in neuroimaging. Here we analyze several large data sets for the human brain
network made available by the Open Connectome Project. We apply statistical
model selection to characterize the degree distributions of these graphs. A
three-parameter
generalized Weibull (also known as a stretched exponential) distribution is a
good fit to most of the observed degree distributions. For almost all networks,
simple power laws cannot fit the data, but in some cases there is statistical
support for power laws with an exponential cutoff. We also calculate the
topological (graph) dimension and the small-world coefficient of
these networks. While suggests a small-world topology, we found that
showing that long-distance connections provide only a small correction
to the topology of the embedding three-dimensional space.Comment: 14 pages, 6 figures, accepted version in Scientific Report
Gunrock: A High-Performance Graph Processing Library on the GPU
For large-scale graph analytics on the GPU, the irregularity of data access
and control flow, and the complexity of programming GPUs have been two
significant challenges for developing a programmable high-performance graph
library. "Gunrock", our graph-processing system designed specifically for the
GPU, uses a high-level, bulk-synchronous, data-centric abstraction focused on
operations on a vertex or edge frontier. Gunrock achieves a balance between
performance and expressiveness by coupling high performance GPU computing
primitives and optimization strategies with a high-level programming model that
allows programmers to quickly develop new graph primitives with small code size
and minimal GPU programming knowledge. We evaluate Gunrock on five key graph
primitives and show that Gunrock has on average at least an order of magnitude
speedup over Boost and PowerGraph, comparable performance to the fastest GPU
hardwired primitives, and better performance than any other GPU high-level
graph library.
Comment: 14 pages, accepted by PPoPP'16 (removed the text repetition in the
previous version v5).
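Gunrock's frontier-centric abstraction can be illustrated with a sequential sketch: each iteration applies an "advance" step that expands the current frontier along outgoing edges and a "filter" step that keeps only newly discovered vertices; on the GPU, both steps are data-parallel over the frontier. This plain-Python BFS is an illustration of the idea, not Gunrock's API:

```python
def bfs_frontier(adj, source):
    """Frontier-centric BFS in the style of advance/filter operators.

    adj maps each vertex to its list of out-neighbours; returns a dict
    mapping each reachable vertex to its BFS depth from the source.
    """
    depth = {source: 0}
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        # advance: expand every frontier vertex along its out-edges
        advanced = [w for v in frontier for w in adj[v]]
        # filter: keep only vertices not yet discovered
        frontier = []
        for w in advanced:
            if w not in depth:
                depth[w] = level
                frontier.append(w)
    return depth
```

Expressing graph primitives as a short loop of frontier operations is what lets a high-level library keep code size small while the underlying advance and filter kernels carry the performance-critical, hardware-specific optimizations.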