Approximation algorithms for hard variants of the stable marriage and hospitals/residents problems
When ties and incomplete preference lists are permitted in the Stable Marriage and Hospitals/Residents problems, stable matchings can have different sizes. The problem of finding a maximum cardinality stable matching in this context is known to be NP-hard, even under very severe restrictions on the number, size and position of ties. In this paper, we describe polynomial-time 5/3-approximation algorithms for variants of these problems in which ties are on one side only and at the end of the preference lists. The particular variant is motivated by important applications in large scale centralised matching schemes.
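For context, the classical Gale-Shapley deferred-acceptance algorithm finds a stable matching in polynomial time when preference lists are strict and complete; the hard variants above arise precisely once ties and incomplete lists are allowed. A minimal sketch of the classical routine on a hypothetical instance (this is background, not the paper's approximation algorithm):

```python
# Deferred acceptance (Gale-Shapley) for strict, complete lists.
# Illustrative only: the 5/3-approximation above handles ties and
# incomplete lists, which this classical routine does not.

def gale_shapley(men_prefs, women_prefs):
    """Return a stable matching as {man: woman}."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)             # proposers not yet matched
    next_proposal = {m: 0 for m in men_prefs}
    engaged = {}                       # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])    # displaced partner becomes free
            engaged[w] = m
        else:
            free.append(m)             # w rejects m
    return {m: w for w, m in engaged.items()}

# Hypothetical two-agent instance.
men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["a", "b"], "y": ["b", "a"]}
print(gale_shapley(men, women))  # {'a': 'x', 'b': 'y'}
```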
The Universe is not a Computer
When we want to predict the future, we compute it from what we know about the
present. Specifically, we take a mathematical representation of observed
reality, plug it into some dynamical equations, and then map the time-evolved
result back to real-world predictions. But while this computational process can
tell us what we want to know, we have taken this procedure too literally,
implicitly assuming that the universe must compute itself in the same manner.
Physical theories that do not follow this computational framework are deemed
illogical, right from the start. But this anthropocentric assumption has
steered our physical models into an impossible corner, primarily because of
quantum phenomena. Meanwhile, we have not been exploring other models in which
the universe is not so limited. In fact, some of these alternate models already
have a well-established importance, but are thought to be mathematical tricks
without physical significance. This essay argues that only by dropping our
assumption that the universe is a computer can we fully develop such models,
explain quantum phenomena, and understand the workings of our universe. (This
essay was awarded third prize in the 2012 FQXi essay contest; a new afterword
compares and contrasts this essay with Robert Spekkens' first prize entry.)
Comment: 10 pages with new afterword; matches published version
Rank Maximal Matchings -- Structure and Algorithms
Let G = (A U P, E) be a bipartite graph where A denotes a set of agents, P
denotes a set of posts and ranks on the edges denote preferences of the agents
over posts. A matching M in G is rank-maximal if it matches the maximum number
of applicants to their top-rank post; subject to this, the maximum number of
applicants to their second-rank post; and so on.
In this paper, we develop a switching graph characterization of rank-maximal
matchings, which is a useful tool that encodes all rank-maximal matchings in an
instance. The characterization leads to simple and efficient algorithms for
several interesting problems. In particular, we give an efficient algorithm to
compute the set of rank-maximal pairs in an instance. We show that the problem
of counting the number of rank-maximal matchings is #P-complete and also give
an FPRAS for the problem. Finally, we consider the problem of deciding whether
a rank-maximal matching is popular among all the rank-maximal matchings in a
given instance, and give an efficient algorithm for the problem.
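The rank-maximality criterion can be made concrete by brute force on a tiny instance: among all matchings, pick one whose signature (number of rank-1 edges, then rank-2 edges, and so on) is lexicographically largest. The instance below is hypothetical; the paper's switching-graph machinery replaces this exponential search with efficient algorithms.

```python
# Brute-force illustration of rank-maximality on a tiny instance.
from itertools import combinations

# Edges of the bipartite graph: (agent, post, rank).
edges = [("a1", "p1", 1), ("a1", "p2", 2),
         ("a2", "p1", 1), ("a2", "p2", 1)]
max_rank = max(r for _, _, r in edges)

def signature(matching):
    """Count how many matched edges have each rank."""
    sig = [0] * max_rank
    for _, _, r in matching:
        sig[r - 1] += 1
    return tuple(sig)

def is_matching(subset):
    agents = [a for a, _, _ in subset]
    posts = [p for _, p, _ in subset]
    return len(set(agents)) == len(agents) and len(set(posts)) == len(posts)

# Enumerate every matching and keep the lexicographically best signature.
best = max(
    (set(c) for k in range(len(edges) + 1)
     for c in combinations(edges, k) if is_matching(c)),
    key=signature,
)
print(sorted(best), signature(best))  # a1-p1 and a2-p2, signature (2, 0)
```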
Grade 3 Giant Cell Tumour of the Distal Humeral Epiphysis Treated with Intralesional Curettage, High Speed Burring and Bone Grafting: A Case Report
Constant depth microfluidic networks based on a generalised Murray’s law for Newtonian and power-law fluids
This paper was presented at the 4th Micro and Nano Flows Conference (MNF2014), which was held at University College, London, UK. The conference was organised by Brunel University and supported by the Italian Union of Thermofluiddynamics, IPEM, the Process Intensification Network, the Institution of Mechanical Engineers, the Heat Transfer Society, HEXAG - the Heat Exchange Action Group, the Energy Institute, ASME Press, LCN London Centre for Nanotechnology, UCL University College London, UCL Engineering, the International NanoScience Community, and www.nanopaprika.eu.
Microfluidic bifurcating networks of rectangular cross-sectional channels are designed
using a novel biomimetic rule, based on Murray’s law. Murray’s principle is extended to
consider the flow of power-law fluids in planar geometries (i.e. of constant depth rectangular
cross-section) typical of lab-on-a-chip applications. The proposed design offers the ability to
control precisely the shear-stress distributions and to predict the flow resistance along the network.
We use an in-house code to perform computational fluid dynamics simulations in order
to assess the extent of the validity of the proposed design for Newtonian, shear-thinning and
shear-thickening fluids under different flow conditions.
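For background, the classical Murray's law for Newtonian flow in circular ducts states that at a bifurcation the cubes of the radii balance: r_parent^3 = r_1^3 + r_2^3, so a symmetric split gives each daughter a radius of r_parent / 2^(1/3). The sketch below shows only this classical rule; the generalised rule for constant-depth rectangular channels and power-law fluids is derived in the paper and not reproduced here.

```python
# Classical Murray's law (Newtonian fluid, circular cross-section):
# r_parent**3 = n * r_child**3 for n equal daughter branches, hence
# r_child = r_parent / n**(1/3). Background only; the paper's design
# rule for rectangular channels and power-law fluids differs.

def murray_daughter_radius(r_parent, n_daughters=2):
    """Radius of each of n equal daughter branches under Murray's law."""
    return r_parent / n_daughters ** (1.0 / 3.0)

r0 = 100.0  # parent radius in micrometres (arbitrary example value)
print(round(murray_daughter_radius(r0), 2))  # 79.37
```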
The Stable Roommates problem with short lists
We consider two variants of the classical Stable Roommates problem with
Incomplete (but strictly ordered) preference lists (SRI) that are degree
constrained, i.e., preference lists are of bounded length. The first variant,
EGAL d-SRI, involves finding an egalitarian stable matching in solvable
instances of SRI with preference lists of length at most d. We show that this
problem is NP-hard even if d=3. On the positive side we give a
(2d+3)/7-approximation algorithm for d ∈ {3, 4, 5}, which improves on the known
bound of 2 for the unbounded preference list case. In the second variant of
SRI, called d-SRTI, preference lists can include ties and are of length at most
d. We show that the problem of deciding whether an instance of d-SRTI admits a
stable matching is NP-complete even if d=3. We also consider the "most stable"
version of this problem and prove a strong inapproximability bound for the d=3
case. However for d=2 we show that the latter problem can be solved in
polynomial time.
Comment: short version appeared at SAGT 201
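The stability notion underlying SRI can be checked directly: a matching is stable if no two agents each prefer the other to their current situation (their assigned partner, or being unmatched). A brute-force checker on a hypothetical three-agent instance with bounded lists, illustrative only:

```python
# Stability check for a Stable Roommates instance with incomplete
# strict lists (SRI). Instance and names are hypothetical; finding a
# good stable matching (e.g. egalitarian) is the hard part, not
# checking stability.
from itertools import combinations

prefs = {  # bounded-length preference lists, as in d-SRI with small d
    1: [2, 3],
    2: [1, 3],
    3: [1, 2],
}

def prefers(a, b, matching):
    """True if agent a prefers b to a's current situation."""
    if b not in prefs[a]:
        return False
    partner = matching.get(a)
    if partner is None:
        return True                    # any listed agent beats being single
    return prefs[a].index(b) < prefs[a].index(partner)

def is_stable(matching):
    """No pair of agents may mutually prefer each other (blocking pair)."""
    return not any(prefers(a, b, matching) and prefers(b, a, matching)
                   for a, b in combinations(prefs, 2)
                   if matching.get(a) != b)

print(is_stable({1: 2, 2: 1}))  # True: agent 3 cannot form a blocking pair
print(is_stable({}))            # False: 1 and 2 block the empty matching
```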
A viable mouse model of factor X deficiency provides evidence for maternal transfer of factor X.
Background: Activated factor X (FXa) is a vitamin K-dependent serine protease that plays a pivotal role in blood coagulation by converting prothrombin to thrombin. There are no reports of humans with complete deficiency of FX, and knockout of murine F10 is embryonic or perinatal lethal.
Objective: We sought to generate a viable mouse model of FX deficiency.
Methods: We used a socket-targeting construct to generate F10-knockout mice by eliminating F10 exon 8 (knockout allele termed F10(tm1Ccmt), abbreviated as '-'; wild-type '+'), and a plug-targeting construct to generate mice expressing an FX variant with normal antigen levels but low levels of FX activity [4-9% of normal in humans carrying the defect, Pro343-->Ser, termed FX Friuli (mutant allele termed F10(tm2Ccmt), abbreviated as F)].
Results: F10-knockout mice exhibited embryonic or perinatal lethality. In contrast, homozygous Friuli mice [F10(F/F)] had FX activity levels of approximately 5.5% (sufficient to rescue both embryonic and perinatal lethality), but developed age-dependent iron deposition and cardiac fibrosis. Interestingly, F10(-/F) mice with FX activity levels of 1-3% also showed complete rescue of lethality. Further study of this model provides evidence supporting a role of maternal FX transfer in embryonic survival.
Conclusions: We demonstrate that, while complete absence of FX is incompatible with murine survival, minimal FX activity as low as 1-3% is sufficient to rescue the lethal phenotype. This viable low-FX mouse model will facilitate the development of FX-directed therapies as well as investigation of the role of FX in embryonic development.
Automatic, fast and robust characterization of noise distributions for diffusion MRI
Knowledge of the noise distribution in magnitude diffusion MRI images is the
centerpiece to quantify uncertainties arising from the acquisition process. The
use of parallel imaging methods, the number of receiver coils and imaging
filters applied by the scanner, amongst other factors, dictate the resulting
signal distribution. Accurate estimation beyond textbook Rician or noncentral
chi distributions often requires information about the acquisition process
(e.g. coils sensitivity maps or reconstruction coefficients), which is not
usually available. We introduce a new method where a change of variable
naturally gives rise to a particular form of the gamma distribution for
background signals. The first moments and maximum likelihood estimators of this
gamma distribution explicitly depend on the number of coils, making it possible
to estimate all unknown parameters using only the magnitude data. A rejection
step is used to make the method automatic and robust to artifacts. Experiments
on synthetic datasets show that the proposed method can reliably estimate both
the degrees of freedom and the standard deviation. The worst case errors range
from below 2% (spatially uniform noise) to approximately 10% (spatially
variable noise). Repeated acquisitions of in vivo datasets show that the
estimated parameters are stable and have lower variances than the compared methods.
Comment: v2: added publisher DOI statement, fixed text typo in appendix A
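The moment relations behind the gamma model can be sketched directly: if the background magnitude M arises from N coils (2N Gaussian channels of standard deviation sigma), then M²/2 follows a Gamma(N, sigma²) distribution with mean N·sigma² and variance N·sigma⁴, so sigma² = var/mean and N = mean²/var. The following is a simplified moment-based illustration on synthetic data; the paper's actual method adds maximum likelihood estimators and a rejection step for robustness to artifacts.

```python
# Moment-based estimation of coil count N and noise sigma from
# synthetic background magnitudes, using M**2/2 ~ Gamma(N, sigma**2).
# Simplified sketch of the model described above, not the full method.
import math
import random

random.seed(0)
N_true, sigma_true = 4, 5.0

# Background magnitudes squared: sum of squares of 2*N Gaussian
# channels (real + imaginary per coil), then halved.
samples = []
for _ in range(100_000):
    m2 = sum(random.gauss(0.0, sigma_true) ** 2 for _ in range(2 * N_true))
    samples.append(m2 / 2.0)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

sigma_est = math.sqrt(var / mean)   # since var/mean = sigma**2
N_est = mean ** 2 / var             # since mean**2/var = N
print(round(N_est, 2), round(sigma_est, 2))  # close to (4, 5.0)
```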
The assessment of the implementation of fuel related legislations and their impact on air quality and public health
The main focus of Work Package 6 of the Aphekom project was to develop innovative methods to analyse the decrease in air pollution levels following implementation of a European regulation to reduce the sulphur content in liquid fuels; to follow the evolution of health risks over time; to track related effect modifiers; and to quantify the monetary costs of the health impacts of the implemented regulation.
Revisiting the relativistic ejection event in XTE J1550-564 during the 1998 outburst
We revisit the discovery outburst of the X-ray transient XTE J1550−564 during which relativistic jets were observed in 1998 September, and review the radio images obtained with the Australian Long Baseline Array, and light curves obtained with the Molonglo Observatory Synthesis Telescope and the Australia Telescope Compact Array. Based on H I spectra, we constrain the source distance to between 3.3 and 4.9 kpc. The radio images, taken some 2 days apart, show the evolution of an ejection event. The apparent separation velocity of the two outermost ejecta is at least 1.3c and may be as large as 1.9c; when relativistic effects are taken into account, the inferred true velocity is ≥ 0.8c. The flux densities appear to peak simultaneously during the outburst, with a rather flat (although still optically thin) spectral index of −0.2.
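The quoted lower bound on the true speed follows from standard superluminal-motion kinematics: for an apparent transverse speed beta_app (in units of c), the intrinsic speed satisfies beta ≥ beta_app / sqrt(1 + beta_app²). As an illustration, applying the single-component relation to the quoted apparent speeds (the two-sided separation case is treated more carefully in the paper):

```python
# Minimum intrinsic jet speed implied by apparent superluminal motion:
# beta_min = beta_app / sqrt(1 + beta_app**2), in units of c.
import math

def beta_min(beta_app):
    """Lower bound on the true speed for apparent transverse speed beta_app."""
    return beta_app / math.sqrt(1.0 + beta_app ** 2)

for beta_app in (1.3, 1.9):  # apparent speeds quoted in the abstract
    print(beta_app, "->", round(beta_min(beta_app), 2))
# An apparent speed of 1.3c already implies beta >= 0.79, consistent
# with the >= 0.8c inferred above.
```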