Advances on Matroid Secretary Problems: Free Order Model and Laminar Case
The most well-known conjecture in the context of matroid secretary problems claims the existence of a constant-factor approximation applicable to any matroid. While this conjecture remains open, modified forms of it have been shown to be true under the assumption that the assignment of weights to the secretaries is not adversarial but uniformly random (Soto [SODA 2011], Oveis Gharan and Vondrák [ESA 2011]). So far, however, no variant of the matroid secretary problem with adversarial weight assignment admitted a known constant-factor approximation. We address this point by presenting a 9-approximation for the free order model, a model suggested shortly after the introduction of the matroid secretary problem and for which no constant-factor approximation was previously known. The free order model is a relaxed version of the original matroid secretary problem, the only difference being that one can choose the order in which secretaries are interviewed. Furthermore, we consider the classical matroid secretary problem for the special case of laminar matroids. Only recently was a constant-factor approximation found for this case, using a clever but rather involved method and analysis (Im and Wang [SODA 2011]) that leads to a 16000/3-approximation; this is arguably the most involved special case of the matroid secretary problem for which a constant-factor approximation is known. We present a considerably simpler and stronger constant-factor approximation, based on reducing the problem to a matroid secretary problem on a partition matroid.
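To make the kind of algorithm involved concrete, here is a minimal Python sketch of the classical threshold rule for the single-item secretary problem, applied independently within each part of a partition matroid. It is an illustrative building block only, under standard textbook assumptions (uniformly random arrival order, cutoff n/e); it is not the 9-approximation for the free order model or the laminar-matroid reduction described above.

```python
import math
import random

def classic_secretary(weights):
    """Classical threshold rule for the single-item secretary problem:
    observe the first n/e arrivals without selecting anything, then accept
    the first arrival that beats everything seen so far."""
    n = len(weights)
    cutoff = int(n / math.e)
    best_seen = max(weights[:cutoff], default=float("-inf"))
    for w in weights[cutoff:]:
        if w > best_seen:
            return w          # select this secretary and stop
    return None               # nothing selected

def partition_secretary(parts):
    """Run the classical rule independently in each part of a partition
    matroid (at most one element may be chosen per part)."""
    return [classic_secretary(part) for part in parts]

# toy usage: weights arrive in uniformly random order within each part
random.seed(0)
parts = [random.sample(range(100), 10) for _ in range(3)]
print(partition_secretary(parts))
```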
'Un Bon Dessin Vaut Mieux Qu'un Long Discours' ('A good drawing is worth more than a long speech'): the role and impact of cartoons in contemporary France
Cartoons have traditionally occupied an important place in French visual culture, and are now a permanent feature in even the most prestigious publications, including Le Monde, where they appear on the front page. Moreover, there is a long tradition of political cartooning which is firmly situated within the historical context of caricature and lampooning, and which over the years has contributed to public debates on key issues such as politics, religion and social change. In this thesis, I focus on political cartoons and argue that the political cartoon is still significant as a cultural product and as a powerful journalistic medium at a time when the existence of the print media is threatened by new technological developments. In order to understand how cartoons remain a powerful mode of expression in the twenty-first century, I begin by examining the historical development of cartooning, tracing its origins in grotesque art, physiognomy and caricature. I then explore a number of events in early modern European history, such as the Reformation and the French Revolution, to show that the medium was used as a means of mass communication, to inform a largely illiterate public, incite protest and instigate rebellion through propaganda. I show how political graphics were used as effective political weapons against the ruling authorities, in the face of tight regulation such as censorship, and underline the French artists' commitment to defending their right of expression. As I demonstrate, this commitment continues to be pursued by contemporary French cartoonists such as Plantu, who is dedicated to fighting for freedom of expression and promoting peace issues under the banner of Le Monde and the United Nations. In analysing a corpus of Plantu's editorial creations, I underline theoretical perspectives for 'reading' cartoons and illuminate the visual rhetoric used by cartoonists to communicate serious issues. I conclude with an assessment of the significant role that French cartoonists played during the 2006 Cartoons War, to further highlight the impact of cartoons as a vehicle for political communication and as a catalyst for debate in the twenty-first century.
Plausibility functions and exact frequentist inference
In the frequentist program, inferential methods with exact control on error rates are a primary focus. The standard approach, however, is to rely on asymptotic approximations, which may not be suitable. This paper presents a general framework for the construction of exact frequentist procedures based on plausibility functions. It is shown that the plausibility function-based tests and confidence regions have the desired frequentist properties in finite samples, with no large-sample justification needed. An extension of the proposed method is also given for problems involving nuisance parameters. Examples demonstrate that the plausibility function-based method is both exact and efficient in a wide variety of problems.
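As an illustration of the construction, the following Python sketch computes a Monte Carlo plausibility function for a normal mean with known variance, using the relative likelihood as the underlying statistic. The setup, function names and grid-search region are illustrative assumptions for this simple case, not code or notation taken from the paper.

```python
import numpy as np

def plausibility(theta, x, sigma=1.0, n_mc=2000, seed=0):
    """Monte Carlo plausibility function for a normal mean with known sigma,
    based on the relative likelihood R(x, theta) = L(theta; x) / sup_t L(t; x):
        pl_x(theta) = P_theta{ R(X, theta) <= R(x, theta) }.
    The region {theta : pl_x(theta) > alpha} then has 1 - alpha coverage."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    n = len(x)
    # -2 log R for a normal mean reduces to n * (xbar - theta)^2 / sigma^2
    observed = n * (x.mean() - theta) ** 2 / sigma ** 2
    sims = rng.normal(theta, sigma, size=(n_mc, n))
    simulated = n * (sims.mean(axis=1) - theta) ** 2 / sigma ** 2
    # a small relative likelihood corresponds to a large -2 log R
    return np.mean(simulated >= observed)

# toy usage: a 95% plausibility region for the mean, by grid search
x = np.random.default_rng(1).normal(0.3, 1.0, size=20)
grid = np.linspace(-0.5, 1.0, 151)
region = [t for t in grid if plausibility(t, x) > 0.05]
print(min(region), max(region))  # roughly the exact normal-mean interval
```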
Leptogenesis and rescattering in supersymmetric models
The observed baryon asymmetry of the Universe can be due to the CP-violating decay of heavy right-handed (s)neutrinos. The amount of the asymmetry depends crucially on their number density. If the (s)neutrinos are generated thermally, supersymmetric models leave only a limited region of parameter space that produces enough baryons. For this reason, several alternative mechanisms have been proposed. We discuss the nonperturbative production of sneutrino quanta by a direct coupling to the inflaton. This production dominates over the corresponding creation of neutrinos, and it can easily (i.e. even for a rather small inflaton-sneutrino coupling) lead to a sufficient baryon asymmetry. We then study the amplification of MSSM degrees of freedom, via their coupling to the sneutrinos, during the rescattering phase which follows the nonperturbative production. This process, which mainly influences the MSSM flat directions, is very efficient as long as the sneutrino quanta are in the relativistic regime. The rapid amplification of the light degrees of freedom may potentially lead to a gravitino problem. We estimate the gravitino production by means of a perturbative calculation, discussing the regime in which we expect it to be reliable.
Combining Optimization and Randomization Approaches for the Design of Clinical Trials
Intentional sampling methods are non-randomized procedures that select a group of individuals for a sample with the purpose of meeting specific prescribed criteria. In this paper we extend previous work on intentional sampling and address the problem of sequential allocation for clinical trials with few patients. Roughly speaking, patients are enrolled sequentially, in the order in which they start treatment at the clinic or hospital. The allocation problem consists in assigning each new patient to one, and only one, of the alternative treatment arms. The main requirement is that the profiles in the alternative arms remain similar with respect to some relevant patient attributes (age, gender, disease, symptom severity and others). We perform numerical experiments based on a real case study and discuss how to set up the perturbation parameters so as to yield a suitable balance between optimality (the similarity of the relative frequencies of patients across the categories in both arms) and decoupling (the absence of a tendency to allocate each pair of patients consistently to the same arm).
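The sketch below shows, in Python, one simple form such a sequential covariate-balancing allocation with a random perturbation can take: a minimization-style rule with a biased-coin parameter p_follow. Both the rule and the parameter name are illustrative assumptions, not the optimization procedure or the perturbation scheme used in the paper.

```python
import random
from collections import Counter

def allocate(patients, categories, p_follow=0.8, seed=0):
    """Assign each patient sequentially to arm 'A' or 'B'.  The arm that
    best balances the per-category counts is preferred, but the preference
    is followed only with probability p_follow (a biased-coin perturbation
    that trades some balance, i.e. optimality, for decoupling)."""
    rng = random.Random(seed)
    counts = {"A": Counter(), "B": Counter()}
    assignment = []
    for patient in patients:
        profile = [(c, patient[c]) for c in categories]

        def imbalance(arm):
            # total imbalance across this patient's categories if the
            # patient were assigned to `arm`
            other = "B" if arm == "A" else "A"
            return sum(abs(counts[arm][k] + 1 - counts[other][k]) for k in profile)

        preferred = min(("A", "B"), key=imbalance)
        arm = preferred if rng.random() < p_follow else ("B" if preferred == "A" else "A")
        for k in profile:
            counts[arm][k] += 1
        assignment.append(arm)
    return assignment

# toy usage with two balancing attributes
patients = [{"sex": s, "age": a} for s in ("M", "F") for a in ("<40", ">=40")] * 5
print(allocate(patients, ["sex", "age"]))
```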
Updated Nucleosynthesis Constraints on Unstable Relic Particles
We revisit the upper limits on the abundance of unstable massive relic particles provided by the success of Big-Bang Nucleosynthesis calculations. We use the cosmic microwave background data to constrain the baryon-to-photon ratio, and incorporate an extensively updated compilation of cross sections into a new calculation of the network of reactions induced by electromagnetic showers that create and destroy the light elements deuterium, ³He, ⁴He, ⁶Li and ⁷Li. We derive analytic approximations that complement and check the full numerical calculations. Considerations of the abundances of ⁴He and ⁶Li exclude exceptional regions of parameter space that would otherwise have been permitted by deuterium alone. We illustrate our results by applying them to massive gravitinos. If they weigh ~100 GeV, their primordial abundance should have been below about 10^{-13} of the total entropy. This would imply an upper limit on the reheating temperature of a few times 10^7 GeV, which could be a potential difficulty for some models of inflation. We discuss possible ways of evading this problem.
Factors affecting body temperatures of toads
Factors influencing levels and rates of variation of body temperature (T_b) in montane Bufo boreas boreas and in lowland Bufo boreas halophilus were investigated as an initial step toward understanding the role of natural thermal variation in the physiology and energetics of these ectothermic animals. Body temperatures of boreas can vary 25–30°C over 24-h periods. Such variation is primarily due to both nocturnal and diurnal activity and the physical characteristics of the montane environment. Bufo boreas halophilus are primarily nocturnal except during breeding and are voluntarily active at body temperatures ranging between 10 and 25°C. Despite the variation in T_b encountered in the field, boreas select a narrow range of T_b in a thermal gradient, averaging 23.5 and 26.2°C for fasted individuals maintained under field conditions or acclimated to 20°C, respectively. In a thermal gradient the mean T_b of fasted halophilus acclimated to 20°C is 23.9°C. Skin color of boreas varies in the field from very dark to light. The dark skins absorb approximately 4% more radiation than the light ones. Light-colored boreas should absorb approximately 5% more radiation than similarly colored halophilus. Evaporative water losses increase directly with skin temperature and vapor pressure deficit in both subspecies. Larger individuals heat and cool more slowly than smaller ones. Calculation of an energy budget for boreal toads suggests that they could sit in direct sunlight for long periods without fatally overheating, provided the skin was continually moist.
Deep generative models for fast photon shower simulation in ATLAS
The need for large-scale production of highly accurate simulated event samples for the extensive physics programme of the ATLAS experiment at the Large Hadron Collider motivates the development of new simulation techniques. Building on the recent success of deep learning algorithms, variational autoencoders and generative adversarial networks are investigated for modelling the response of the central region of the ATLAS electromagnetic calorimeter to photons of various energies. The properties of synthesised showers are compared with showers from a full detector simulation using Geant4. Both variational autoencoders and generative adversarial networks are capable of quickly simulating electromagnetic showers with correct total energies and stochasticity, though the modelling of some shower shape distributions requires more refinement. This feasibility study demonstrates the potential of using such algorithms for ATLAS fast calorimeter simulation in the future and shows a possible way to complement current simulation techniques.
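For orientation, the following PyTorch sketch shows the bare bones of a variational autoencoder for flattened calorimeter images. The layer sizes, the number of cells and the latent dimension are arbitrary illustrative choices; this is not the ATLAS network or its training setup.

```python
import torch
from torch import nn

class ShowerVAE(nn.Module):
    """Minimal variational autoencoder for flattened calorimeter images."""
    def __init__(self, n_cells=300, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_cells, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, n_cells), nn.Softplus())  # non-negative cell energies

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # reconstruction term plus KL divergence to the standard normal prior
    recon_term = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl

# toy usage: one optimisation step on random stand-in "showers"
model = ShowerVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(32, 300)
recon, mu, logvar = model(batch)
loss = vae_loss(recon, batch, mu, logvar)
loss.backward()
opt.step()
```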
Search for pair production of boosted Higgs bosons via vector-boson fusion in the bb̄bb̄ final state using pp collisions at √s = 13 TeV with the ATLAS detector
A search for Higgs boson pair production via vector-boson fusion is performed in the Lorentz-boosted regime, where a Higgs boson candidate is reconstructed as a single large-radius jet, using 140 fb−1 of proton–proton collision data at √s = 13 TeV recorded by the ATLAS detector at the Large Hadron Collider. Only Higgs boson decays into bottom quark pairs are considered. The search is particularly sensitive to the quartic coupling between two vector bosons and two Higgs bosons relative to its Standard Model prediction, κ2V. This study constrains κ2V to 0.55 < κ2V < 1.49 at the 95% confidence level. The value κ2V = 0 is excluded with a significance of 3.8 standard deviations with other Higgs boson couplings fixed to their Standard Model values. A search for new heavy spin-0 resonances that would mediate Higgs boson pair production via vector-boson fusion is carried out in the mass range of 1–5 TeV for the first time under several model and decay-width assumptions. No significant deviation from the Standard Model hypothesis is observed, and exclusion limits at the 95% confidence level are derived.