A Gibbsian model for message routeing in highly dense multihop networks
We investigate a probabilistic model for routeing of messages in
relay-augmented multihop ad-hoc networks, where each transmitter sends one
message to the origin. Given the (random) transmitter locations, we weight the
family of random, uniformly distributed message trajectories by an exponential
probability weight, favouring trajectories with low interference (measured in
terms of signal-to-interference ratio) and trajectory families with little
congestion (measured in terms of the number of pairs of hops using the same
relay). Under the resulting Gibbs measure, the system targets the best
compromise between entropy, interference and congestion for a common welfare,
instead of an optimization of the individual trajectories.
In the limit of high spatial density of users, we describe the totality of
all the message trajectories in terms of empirical measures. Employing large
deviations arguments, we derive a characteristic variational formula for the
limiting free energy and analyse the minimizer(s) of the formula, which
describe the most likely shapes of the trajectory flow. The empirical measures
of the message trajectories well describe the interference, but not the
congestion; the latter requires introducing an additional empirical measure.
Our results remain valid when the two penalization terms are replaced by more
general functionals of these two empirical measures.
Comment: 40 pages
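As a hedged sketch of the construction described above (the symbols beta, I, C, Z and mu are our own illustrative notation, not the paper's), the exponential reweighting of the uniform trajectory family has the generic Gibbs form:

```latex
% Generic Gibbs reweighting of a family S of message trajectories
% (illustrative notation; the paper's exact functionals differ):
%   I(S) : interference penalty (via signal-to-interference ratios)
%   C(S) : congestion penalty (pairs of hops sharing a relay)
%   \mu  : uniform reference distribution over trajectory families
P_\beta(S) \;=\; \frac{1}{Z_\beta}\,
  \exp\!\bigl(-\beta\,[\,I(S) + C(S)\,]\bigr)\,\mu(S),
\qquad
Z_\beta \;=\; \sum_{S'} \exp\!\bigl(-\beta\,[\,I(S') + C(S')\,]\bigr)\,\mu(S').
```

The trade-off the abstract describes then appears in the large-density limit as a variational formula balancing the entropy of mu against the two penalty functionals.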
Human-Machine Collaborative Optimization via Apprenticeship Scheduling
Coordinating agents to complete a set of tasks with intercoupled temporal and
resource constraints is computationally challenging, yet human domain experts
can solve these difficult scheduling problems using paradigms learned through
years of apprenticeship. A process for manually codifying this domain knowledge
within a computational framework is necessary to scale beyond the
"single-expert, single-trainee" apprenticeship model. However, human domain
experts often have difficulty describing their decision-making processes,
causing the codification of this knowledge to become laborious. We propose a
new approach for capturing domain-expert heuristics through a pairwise ranking
formulation. Our approach is model-free and does not require enumerating or
iterating through a large state space. We empirically demonstrate that this
approach accurately learns multifaceted heuristics on a synthetic data set
incorporating job-shop scheduling and vehicle routing problems, as well as on
two real-world data sets consisting of demonstrations of experts solving a
weapon-to-target assignment problem and a hospital resource allocation problem.
We also demonstrate that policies learned from human scheduling demonstration
via apprenticeship learning can substantially improve the efficiency of a
branch-and-bound search for an optimal schedule. We employ this human-machine
collaborative optimization technique on a variant of the weapon-to-target
assignment problem. We demonstrate that this technique generates solutions
substantially superior to those produced by human domain experts at a rate up
to 9.5 times faster than an optimization approach and can be applied to
optimally solve problems twice as complex as those solved by a human
demonstrator.
Comment: Portions of this paper were published in the Proceedings of the
International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and
in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper
consists of 50 pages with 11 figures and 4 tables.
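The pairwise-ranking formulation can be sketched as follows. This is a minimal illustration with invented toy features and an off-the-shelf scikit-learn classifier, not the paper's actual model or data: each expert decision pairs the chosen action with every rejected alternative, and a classifier is trained on feature differences to predict which of two candidate actions the expert would rank higher.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy demonstration data (invented): each action is a feature vector,
# e.g. [task urgency, resource cost]; in each observed decision the
# expert picked `chosen` over every action in `rejected`.
decisions = [
    (np.array([0.9, 0.2]), [np.array([0.3, 0.8]), np.array([0.1, 0.5])]),
    (np.array([0.8, 0.1]), [np.array([0.4, 0.9])]),
]

# Pairwise training examples: the difference (chosen - rejected) gets
# label 1, the mirrored difference gets label 0, so the classifier
# learns a ranking function over action pairs.
X, y = [], []
for chosen, rejected_list in decisions:
    for rejected in rejected_list:
        X.append(chosen - rejected); y.append(1)
        X.append(rejected - chosen); y.append(0)

ranker = LogisticRegression().fit(np.array(X), np.array(y))

def prefer(a, b):
    """True if the learned policy ranks action a above action b."""
    return ranker.predict_proba((a - b).reshape(1, -1))[0, 1] > 0.5

# An urgent, cheap action should be preferred over a slow, costly one.
print(prefer(np.array([0.95, 0.1]), np.array([0.2, 0.9])))
```

Because training never enumerates states, only observed pairwise comparisons, this matches the model-free character the abstract claims.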
Social Exclusion Orderings
We consider the problem of measuring social exclusion using qualitative data. We suggest a class of social exclusion indicators and derive the partial orderings associated with dominance for these indicators. We characterize the set of transformations on the distribution of individual deprivation scores underlying the proposed dominance conditions.
Keywords: social exclusion, dominance, measures.
Data-informed fuzzy measures for fuzzy integration of intervals and fuzzy numbers
The fuzzy integral (FI) with respect to a fuzzy measure (FM) is a powerful means of aggregating information. The most popular FIs are the Choquet and Sugeno, and most research focuses on these two variants. The arena of the FM is much more populated, including numerically derived FMs such as the Sugeno λ-measure and decomposable measure, expert-defined FMs, and data-informed FMs. The drawback of numerically derived and expert-defined FMs is that one must know something about the relative values of the input sources. However, there are many problems where this information is unavailable, such as crowdsourcing. This paper focuses on data-informed FMs, i.e., those FMs that are computed by an algorithm that analyzes some property of the input data itself, gleaning the importance of each input source from the data it provides. The original instantiation of a data-informed FM is the agreement FM, which assigns high confidence to combinations of sources that numerically agree with one another. This paper extends our previous work in data-informed FMs by proposing the uniqueness measure and additive measure of agreement for interval-valued evidence. We then extend data-informed FMs to fuzzy number (FN)-valued inputs. We demonstrate the proposed FMs by aggregating interval and FN evidence with the Choquet and Sugeno FIs for both synthetic and real-world data.
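For scalar inputs, the discrete Choquet integral mentioned above sorts the inputs and accumulates their successive differences weighted by the fuzzy measure of the top-ranked source sets. The sketch below uses an invented three-source measure for illustration; it is not a measure from the paper:

```python
def choquet(values, measure):
    """Discrete Choquet integral of `values` (dict: source -> number)
    with respect to `measure` (dict: frozenset of sources -> number in
    [0, 1], with measure(empty set) = 0 and measure(all sources) = 1)."""
    # Sort sources by descending input value: h(x_(1)) >= ... >= h(x_(n)).
    sources = sorted(values, key=values.get, reverse=True)
    total = 0.0
    for i, s in enumerate(sources):
        top_set = frozenset(sources[: i + 1])  # A_i = {x_(1), ..., x_(i)}
        h = values[s]
        h_next = values[sources[i + 1]] if i + 1 < len(sources) else 0.0
        total += (h - h_next) * measure[top_set]
    return total

# Invented fuzzy measure on three sources; a valid FM must be monotone:
# g(A) <= g(B) whenever A is a subset of B.
g = {
    frozenset(): 0.0,
    frozenset({"a"}): 0.4, frozenset({"b"}): 0.3, frozenset({"c"}): 0.2,
    frozenset({"a", "b"}): 0.7, frozenset({"a", "c"}): 0.6,
    frozenset({"b", "c"}): 0.5,
    frozenset({"a", "b", "c"}): 1.0,
}
print(choquet({"a": 0.9, "b": 0.6, "c": 0.3}, g))  # 0.3*0.4 + 0.3*0.7 + 0.3*1.0
```

The paper's interval- and fuzzy-number-valued extensions replace the scalar arithmetic here with interval or FN arithmetic; the sorting-and-weighting structure is the same.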
Some open problems on geometric separability
In this project, we have tackled two open problems regarding the separability of red and blue points in the plane, within the framework of computational geometry. Building on existing results, we have extended and improved algorithms, specifically for separability using 4 parallel lines that define monochromatic strips. We have also studied sufficient conditions under which these separability criteria are met.
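As background for such separability questions, the single-line case (one strip boundary) can be decided as a linear-programming feasibility problem. This is a generic building block, not the project's four-parallel-line algorithm, and it assumes SciPy as a dependency:

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(red, blue):
    """Feasibility LP: does a line w.x + b = 0 strictly separate the
    red points (w.x + b >= 1) from the blue points (w.x + b <= -1)?
    Variables are [w1, w2, b]; the objective is irrelevant (all zeros),
    and the unit margin is achievable by rescaling w whenever the two
    finite point sets are strictly separable."""
    A, rhs = [], []
    for x, y in red:                     # -(w.x + b) <= -1
        A.append([-x, -y, -1.0]); rhs.append(-1.0)
    for x, y in blue:                    # w.x + b <= -1
        A.append([x, y, 1.0]); rhs.append(-1.0)
    res = linprog(c=[0.0, 0.0, 0.0], A_ub=np.array(A), b_ub=np.array(rhs),
                  bounds=[(None, None)] * 3)
    return res.success

print(linearly_separable([(0, 0), (1, 0)], [(0, 2), (1, 2)]))  # horizontal strip exists
print(linearly_separable([(0, 0), (2, 2)], [(0, 2), (2, 0)]))  # XOR layout, no single line
```

Multi-line criteria such as the four-parallel-line strips add combinatorial structure (which color each strip receives) on top of this kind of feasibility test.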
A morphospace of functional configuration to assess configural breadth based on brain functional networks
The best approach to quantify human brain functional reconfigurations in
response to varying cognitive demands remains an unresolved topic in network
neuroscience. We propose that such functional reconfigurations may be
categorized into three different types: i) Network Configural Breadth, ii)
Task-to-Task transitional reconfiguration, and iii) Within-Task
reconfiguration. In order to quantify these reconfigurations, we propose a
mesoscopic framework focused on functional networks (FNs) or communities. To do
so, we introduce a 2D network morphospace that relies on two novel mesoscopic
metrics, Trapping Efficiency (TE) and Exit Entropy (EE), which capture topology
and integration of information within and between a reference set of FNs. In
this study, we use this framework to quantify the Network Configural Breadth
across different tasks. We show that the metrics defining this morphospace can
differentiate FNs, cognitive tasks and subjects. We also show that network
configural breadth significantly predicts behavioral measures, such as episodic
memory, verbal episodic memory, fluid intelligence and general intelligence. In
essence, we put forth a framework to explore the cognitive space in a
comprehensive manner, for each individual separately, and at different levels
of granularity. This tool can also quantify the FN reconfigurations that
result from the brain switching between mental states.
Comment: main article: 24 pages, 8 figures, 2 tables; supporting information: 11 pages, 5 figures
Efficiently Discovering Locally Exceptional yet Globally Representative Subgroups
Subgroup discovery is a local pattern mining technique to find interpretable descriptions of sub-populations that stand out on a given target variable. That is, these sub-populations are exceptional with regard to the global distribution. In this paper we argue that in many applications, such as scientific discovery, subgroups are only useful if they are additionally representative of the global distribution with regard to a control variable. That is, when the distribution of this control variable is the same, or almost the same, as over the whole data. We formalise this objective function and give an efficient algorithm to compute its tight optimistic estimator for the case of a numeric target and a binary control variable. This enables us to use the branch-and-bound framework to efficiently discover the top-k subgroups that are both exceptional as well as representative. Experimental evaluation on a wide range of datasets shows that with this algorithm we discover meaningful representative patterns and are up to orders of magnitude faster in terms of node evaluations as well as time.
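The role of a tight optimistic estimator in branch-and-bound search can be illustrated with the standard mean-shift ("impact") quality for a numeric target. This is a stand-in for exposition only; the paper's actual objective additionally rewards representativeness with respect to the control variable:

```python
def quality(targets, global_mean):
    """Mean-shift quality n * (mean(subgroup) - global mean)."""
    n = len(targets)
    return n * (sum(targets) / n - global_mean)

def optimistic_estimate(targets, global_mean):
    """Tight upper bound on the quality of any refinement of this
    subgroup: the best-scoring subset of its members is always a prefix
    of the members sorted by descending target value, so scanning the
    prefixes yields the bound. Branch-and-bound prunes a search node
    when this bound falls below the current top-k threshold."""
    best, running_sum = 0.0, 0.0
    for k, t in enumerate(sorted(targets, reverse=True), start=1):
        running_sum += t
        best = max(best, running_sum - k * global_mean)
    return best

data = [3.0, 9.0, 1.0, 7.0]           # target values of a subgroup's members
gm = 4.0                              # global mean of the whole dataset
print(quality(data, gm))              # 4 * (5 - 4) = 4.0
print(optimistic_estimate(data, gm))  # best prefix {9, 7}: 16 - 2*4 = 8.0
```

The bound is "tight" in the sense that some subset of the subgroup's members actually attains it, which is what makes the pruning aggressive without sacrificing optimality.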