The Cost of Stability in Coalitional Games
A key question in cooperative game theory is that of coalitional stability,
usually captured by the notion of the \emph{core}--the set of outcomes such
that no subgroup of players has an incentive to deviate. However, some
coalitional games have empty cores, and any outcome in such a game is unstable.
In this paper, we investigate the possibility of stabilizing a coalitional
game by using external payments. We consider a scenario where an external
party, which is interested in having the players work together, offers a
supplemental payment to the grand coalition (or, more generally, a particular
coalition structure). This payment is conditional on players not deviating from
their coalition(s). The sum of this payment plus the actual gains of the
coalition(s) may then be divided among the agents so as to promote stability.
We define the \emph{cost of stability (CoS)} as the minimal external payment
that stabilizes the game.
We provide general bounds on the cost of stability in several classes of
games, and explore its algorithmic properties. To develop a better intuition
for the concepts we introduce, we provide a detailed algorithmic study of the
cost of stability in weighted voting games, a simple but expressive class of
games which can model decision-making in political bodies, and cooperation in
multiagent settings. Finally, we extend our model and results to games with
coalition structures.
Comment: 20 pages; will be presented at SAGT'0
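The core condition and the cost of stability described above can be illustrated with the classic three-player majority game, whose core is empty. The sketch below is illustrative only (the game, payoff vectors, and tolerance are not taken from the paper): an allocation is stable when every coalition receives at least its value, and a supplemental payment of 1/2 to the grand coalition is both sufficient and necessary here, so the cost of stability is 1/2.

```python
from itertools import combinations

def v_majority(coalition):
    """Characteristic function of the 3-player majority game:
    a coalition wins (value 1) iff it contains at least 2 players."""
    return 1.0 if len(coalition) >= 2 else 0.0

def is_stable(payoffs, v, players):
    """Core condition: every coalition S gets at least v(S)."""
    for r in range(1, len(players) + 1):
        for S in combinations(players, r):
            if sum(payoffs[p] for p in S) < v(S) - 1e-9:
                return False
    return True

players = (0, 1, 2)
# The grand coalition alone earns v(N) = 1, but any split of 1 leaves
# some 2-player coalition below its value of 1, so the core is empty.
print(is_stable({0: 1/3, 1: 1/3, 2: 1/3}, v_majority, players))  # False
# With an external supplement of 1/2 (total payout 3/2), equal shares
# of 1/2 satisfy every coalition: summing the three pair constraints
# gives 2(x1+x2+x3) >= 3, so no smaller total works.
print(is_stable({0: 0.5, 1: 0.5, 2: 0.5}, v_majority, players))  # True
```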
Improving visual representations of code
This work was carried out in 1997 by the Visual Research Group in the Centre for Software Maintenance at the University of Durham.
Variability in continuous traffic monitoring data
Each state in the United States can be viewed as a universe of road segments. For each road segment in each state, it is desired to know various traffic characteristics based on count data, classification count data, and weigh-in-motion data. These data are essential for highway design, maintenance, safety, and planning. Given no cost constraints, every road segment would be monitored continuously every day of the year. In practice, however, only a few road segments are monitored continuously to produce annual characteristics of traffic flow. The remaining road segments are monitored for one or two days each year, and the resulting data are 'adjusted' (using factors based on data collected from the continuously monitored road segments) to produce estimates of annual characteristics. With this general approach, each state strives to provide estimates of annual characteristics for each road segment within its jurisdiction. In 1985, the Federal Highway Administration (FHWA) published the Traffic Monitoring Guide to assist states in achieving this end. As with almost any data collection effort, the monitoring data suffer from errors from many sources. In this paper, we report some empirical findings from a research project sponsored by the FHWA. The project studied the variability in the traffic data from the continuously monitored road segments from the state(s), and the extent to which this variability is transferred to, and affects the precision of, the data produced from the road segments that are monitored only one or two days each year. The ultimate hope is that states will eventually be able not only to publish an estimate of a characteristic such as Average Annual Daily Traffic (AADT) for each road segment, but also to accompany each estimate with a statement of how good it is in terms of its estimated variability or precision, likely expressed as a coefficient of variation.
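The factoring approach described above can be sketched in a few lines: a short-duration count is scaled by an adjustment factor derived from a continuously monitored segment, and the continuous data also yield a coefficient of variation of the kind the paper hopes will eventually accompany each estimate. All numbers below are made up for illustration; they are not from the FHWA project.

```python
import statistics

# Hypothetical daily volumes from one continuously monitored segment,
# all from the same month (illustrative numbers only).
daily_counts_same_month = [10400, 9800, 11200, 10100, 10900]

# A one-day short count taken on a similar, factored road segment.
short_count = 9600

# Adjustment factor: ratio of the continuous station's annual average
# (a made-up AADT of 11500) to that month's average daily volume.
aadt_continuous = 11500
month_avg = statistics.mean(daily_counts_same_month)
factor = aadt_continuous / month_avg

# Factored AADT estimate for the short-count segment.
aadt_estimate = short_count * factor

# Coefficient of variation of the continuous data: a simple precision
# statement of the kind the paper advocates publishing with estimates.
cv = statistics.stdev(daily_counts_same_month) / month_avg
print(round(aadt_estimate), round(cv, 3))
```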
Prevention of childhood poisoning in the home: overview of systematic reviews and a systematic review of primary studies
Unintentional poisoning is a significant child public health problem. This systematic overview of reviews, supplemented with a systematic review of recently published primary studies, synthesizes evidence on non-legislative interventions to reduce childhood poisonings in the home, with particular reference to interventions that could be implemented by Children's Centres in England or by community health or social care services in other high-income countries. Thirteen systematic reviews, two meta-analyses and 47 primary studies were identified. The interventions most commonly comprised education, provision of cupboard/drawer locks, and poison control centre (PCC) number stickers. Meta-analyses and primary studies provided evidence that interventions improved poison prevention practices. Twenty-eight per cent of studies reporting safe medicine storage (OR from meta-analysis 1.57, 95% CI 1.22–2.02), 23% reporting safe storage of other products (OR from meta-analysis 1.63, 95% CI 1.22–2.17) and 46% reporting availability of PCC numbers (OR from meta-analysis 3.67, 95% CI 1.84–7.33) demonstrated significant effects favouring the intervention group. There was a lack of evidence that interventions reduced poisoning rates. Parents should be provided with poison prevention education, cupboard/drawer locks and emergency contact numbers to use in the event of a poisoning. Further research is required to determine whether improving poison prevention practices reduces poisoning rates.
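The odds ratios with confidence intervals quoted above follow the standard 2x2-table construction; a minimal sketch using Woolf's (Wald) method is below. The counts are hypothetical and are not taken from the studies pooled in the review.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a 95% Wald confidence interval (Woolf's method)
    from a 2x2 table: a/b = events/non-events in the intervention arm,
    c/d = events/non-events in the control arm."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for a safe-storage outcome (illustrative only).
or_, lo, hi = odds_ratio_ci(120, 80, 90, 110)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```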
Deriving a preference-based utility measure for cancer patients from the European Organisation for Research and Treatment of Cancer's Quality of Life Questionnaire C30: a confirmatory versus exploratory approach
Background: Multi attribute utility instruments (MAUIs) are preference-based measures that
comprise a health state classification system (HSCS) and a scoring algorithm that assigns a utility
value to each health state in the HSCS. When developing a MAUI from a health-related quality
of life (HRQOL) questionnaire, first an HSCS must be derived. This typically involves selecting
a subset of domains and items because HRQOL questionnaires typically have too many items
to be amenable to the valuation task required to develop the scoring algorithm for a MAUI.
Currently, exploratory factor analysis (EFA) followed by Rasch analysis is recommended for
deriving a MAUI from a HRQOL measure.
Aim: To determine whether confirmatory factor analysis (CFA) is more appropriate and efficient
than EFA to derive an HSCS from the European Organisation for Research and Treatment
of Cancer's core HRQOL questionnaire, the Quality of Life Questionnaire (QLQ-C30), given its
well-established domain structure.
Methods: QLQ-C30 (Version 3) data were collected from 356 patients receiving palliative
radiotherapy for recurrent/metastatic cancer (various primary sites). The dimensional structure
of the QLQ-C30 was tested with EFA and CFA, the latter informed by the established
QLQ-C30 structure and the views of both patients and clinicians on which items are most relevant.
Dimensions determined by EFA or CFA were then subjected to Rasch analysis.
Results: CFA results generally supported the proposed QLQ-C30 structure (comparative fit
index =0.99, Tucker–Lewis index =0.99, root mean square error of approximation =0.04). EFA
revealed fewer factors and some items cross-loaded on multiple factors. Further assessment
of dimensionality with Rasch analysis allowed better alignment of the EFA dimensions with
those detected by CFA.
Conclusion: CFA was more appropriate and efficient than EFA in producing clinically interpretable
results for the HSCS for a proposed new cancer-specific MAUI. Our findings suggest
that CFA should be recommended generally when deriving a preference-based measure from a
HRQOL measure that has an established domain structure.
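The fit indices reported above (CFI, TLI, RMSEA) are standard CFA diagnostics; the RMSEA in particular has a simple closed form. The sketch below uses the conventional formula, with hypothetical chi-square and degrees-of-freedom values (the abstract reports RMSEA = 0.04 for N = 356 but not the underlying statistics).

```python
import math

def rmsea(chi_sq, df, n):
    """Root mean square error of approximation from a model's chi-square
    statistic, its degrees of freedom, and the sample size N:
    sqrt(max(chi^2 - df, 0) / (df * (N - 1)))."""
    return math.sqrt(max(chi_sq - df, 0) / (df * (n - 1)))

# Illustrative values only, chosen to show the order of magnitude involved.
print(round(rmsea(chi_sq=250.0, df=160, n=356), 3))
```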
Addendum to: Capillary floating and the billiard ball problem
We compare the results of our earlier paper on the floating in neutral
equilibrium at arbitrary orientation in the sense of Finn-Young with the
literature on its counterpart in the sense of Archimedes. We add a few remarks
of personal and social-historical character.
Comment: This is an addendum to my article Capillary floating and the billiard ball problem, Journal of Mathematical Fluid Mechanics 14 (2012), 363 -- 38
Multiresolution analysis of active region magnetic structure and its correlation with the Mt. Wilson classification and flaring activity
Two different multi-resolution analyses are used to decompose the structure
of active region magnetic flux into concentrations of different size scales.
Lines separating these opposite polarity regions of flux at each size scale are
found. These lines are used as a mask on a map of the magnetic field gradient
to sample the local gradient between opposite polarity regions of given scale
sizes. It is shown that the maximum, average and standard deviation of the
magnetic flux gradient for alpha, beta, beta-gamma and beta-gamma-delta active
regions increase in the order listed, and that the order is maintained over all
length-scales. This study demonstrates that, on average, the Mt. Wilson
classification encodes the notion of activity over all length-scales in the
active region, and not just those length-scales at which the strongest flux
gradients are found. Further, it is also shown that the average gradients in
the field, and the average length-scale at which they occur, also increase in
the same order. Finally, there are significant differences in the gradient
distribution, between flaring and non-flaring active regions, which are
maintained over all length-scales. It is also shown that the average gradient
content of active regions that have large flares (GOES class 'M' and above) is
larger than that for active regions containing flares of all flare sizes; this
difference is also maintained at all length-scales.
Comment: Accepted for publication in Solar Physics
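The masking step described above (sampling the field gradient only along lines separating opposite-polarity flux) can be sketched at a single size scale on a toy magnetogram. Everything below is illustrative: the array, the finite-difference gradient, and the simple sign-change mask are stand-ins for the paper's multi-resolution pipeline.

```python
import statistics

# Toy line-of-sight magnetogram (gauss): negative polarity on the left,
# positive on the right, with a steep neutral line in between.
B = [
    [-900, -800, -100,  700,  850],
    [-950, -850,  -50,  750,  900],
    [-900, -800, -100,  700,  850],
]
rows, cols = len(B), len(B[0])

def gradient_mag(i, j):
    """Gradient magnitude via central/one-sided finite differences."""
    gx = (B[i][min(j+1, cols-1)] - B[i][max(j-1, 0)]) / (min(j+1, cols-1) - max(j-1, 0))
    gy = (B[min(i+1, rows-1)][j] - B[max(i-1, 0)][j]) / (min(i+1, rows-1) - max(i-1, 0))
    return (gx**2 + gy**2) ** 0.5

# Mask: pixels where the sign flips relative to the right-hand neighbour,
# i.e. pixels on a line separating opposite-polarity flux regions.
samples = [gradient_mag(i, j)
           for i in range(rows) for j in range(cols - 1)
           if B[i][j] * B[i][j+1] < 0]

# Summary statistics of the kind compared across Mt. Wilson classes.
print(max(samples), statistics.mean(samples), statistics.pstdev(samples))
```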
Particle Dark Matter Constraints from the Draco Dwarf Galaxy
It is widely thought that neutralinos, the lightest supersymmetric particles,
could comprise most of the dark matter. If so, then dark halos will emit radio
and gamma ray signals initiated by neutralino annihilation. A particularly
promising place to look for these indicators is at the center of the local
group dwarf spheroidal galaxy Draco, and recent measurements of the motion of
its stars have revealed it to be an even better target for dark matter
detection than previously thought. We compute limits on WIMP properties for
various models of Draco's dark matter halo. We find that if the halo is nearly
isothermal, as the new measurements indicate, then current gamma ray flux
limits prohibit much of the neutralino parameter space. If Draco has a moderate
magnetic field, then current radio limits can rule out more of it. These
results are appreciably stronger than other current constraints, and so
acquiring more detailed data on Draco's density profile becomes one of the most
promising avenues for identifying dark matter.
Comment: 13 pages, 6 figures
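The reasoning above rests on the conventional annihilation-flux formula (stated here in its standard form, not reproduced from the paper): the gamma-ray flux scales with the annihilation cross-section over the squared neutralino mass, times the line-of-sight integral of the squared halo density, which is why Draco's density profile is so decisive.

```latex
\Phi_\gamma \;=\; \frac{\langle \sigma v \rangle}{8\pi\, m_\chi^2}
  \int_{E_{\rm th}}^{m_\chi} \frac{dN_\gamma}{dE}\, dE
  \int_{\Delta\Omega} d\Omega \int_{\rm l.o.s.} \rho^2(l)\, dl
```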
Adsorption of mono- and multivalent cat- and anions on DNA molecules
Adsorption of monovalent and multivalent cations and anions on a deoxyribose
nucleic acid (DNA) molecule from a salt solution is investigated by computer
simulation. The ions are modelled as charged hard spheres, the DNA molecule as
a point charge pattern following the double-helical phosphate strands. The
geometrical shape of the DNA molecules is modelled on different levels ranging
from a simple cylindrical shape to structured models which include the major
and minor grooves between the phosphate strands. The densities of the ions
adsorbed on the phosphate strands, in the major and in the minor grooves are
calculated. First, we find that the adsorption pattern on the DNA surface
depends strongly on its geometrical shape: counterions adsorb preferentially
along the phosphate strands for a cylindrical model shape, but in the minor
groove for a geometrically structured model. Second, we find that an addition
of monovalent salt ions results in an increase of the charge density in the
minor groove while the total charge density of ions adsorbed in the major
groove stays unchanged. The adsorbed ion densities are highly structured along
the minor groove while they are almost smeared out along the major groove.
Furthermore, for a fixed amount of added salt, the major groove cationic charge
is independent of the counterion valency. For increasing salt concentration the
major groove is neutralized while the total charge adsorbed in the minor groove
is constant. DNA overcharging is detected for multivalent salt. Simulations for
larger ion radii, which mimic the effect of ion hydration, indicate an
increased adsorption of cations in the major groove.
Comment: 34 pages with 14 figures
Astroparticle Physics with a Customized Low-Background Broad Energy Germanium Detector
The MAJORANA Collaboration is building the MAJORANA DEMONSTRATOR, a 60 kg
array of high purity germanium detectors housed in an ultra-low background
shield at the Sanford Underground Laboratory in Lead, SD. The MAJORANA
DEMONSTRATOR will search for neutrinoless double-beta decay of 76Ge while
demonstrating the feasibility of a tonne-scale experiment. It may also carry
out a dark matter search in the 1-10 GeV/c^2 mass range. We have found that
customized Broad Energy Germanium (BEGe) detectors produced by Canberra have
several desirable features for a neutrinoless double-beta decay experiment,
including low electronic noise, excellent pulse shape analysis capabilities,
and simple fabrication. We have deployed a customized BEGe, the MAJORANA
Low-Background BEGe at Kimballton (MALBEK), in a low-background cryostat and
shield at the Kimballton Underground Research Facility in Virginia. This paper
will focus on the detector characteristics and measurements that can be
performed with such a radiation detector in a low-background environment.
Comment: Submitted to NIMA Proceedings, SORMA XII. 9 pages, 4 figures
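The pulse-shape-analysis capability mentioned above is commonly exploited in BEGe detectors through the A/E parameter: the maximum amplitude of the current pulse (the derivative of the charge waveform) divided by the event energy, which separates single-site signal-like events from multi-site background. The waveforms below are synthetic, not MALBEK data.

```python
def a_over_e(charge_waveform):
    """A/E: max current amplitude over total energy (charge step)."""
    current = [b - a for a, b in zip(charge_waveform, charge_waveform[1:])]
    energy = charge_waveform[-1] - charge_waveform[0]
    return max(current) / energy

# Single-site-like event: the full charge arrives in one fast step.
single_site = [0, 0, 0, 5, 60, 95, 100, 100, 100]
# Multi-site-like event: the same energy collected in two slower steps.
multi_site = [0, 0, 25, 45, 50, 55, 80, 100, 100]

print(a_over_e(single_site) > a_over_e(multi_site))  # True
```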