Assigning committee seats in mixed-member systems: how important is "localness" compared to the mode of election?
"Committees are important features in legislative decision making. The question of who
serves on what committee is thus an important one. This paper asks about how mixed electoral
systems affect the way committee seats are allocated. Stratmann and Baur (2002) argue that
German parties strategically assign nominally elected legislators to those committees that allow
them to please their local constituents. Our paper questions this argument in light of the
functioning of the German mixed-member system and the individual motivations of German
MPs. We argue that the motivations of German legislators do not necessarily mirror their mode
of election, and that German parties do not necessarily perceive winning nominal votes as a
predominant goal. We hypothesize that German parties aim to increase their vote share on the
list-vote (Zweitstimme) by supporting legislators with a strong local focus independent of their
mode of election. We will test this argument empirically drawing from the German Candidate
Study 2005 and from statistical data on committee membership for the 16th German Bundestag
(2005-2009)." (author's abstract
Rapid pretreatment of Miscanthus using the low-cost ionic liquid triethylammonium hydrogen sulfate at elevated temperatures
Deconstruction with low-cost ionic liquids (ionoSolv) is a promising method to pre-condition lignocellulosic biomass for the production of renewable fuels, materials and chemicals. This study investigated process intensification strategies for the ionoSolv pretreatment of Miscanthus × giganteus using the low-cost ionic liquid triethylammonium hydrogen sulfate ([TEA][HSO4]) in the presence of 20 wt% water, at high temperatures (150, 160, 170 and 180°C) and a high solid-to-solvent loading of 1:5 g/g. We discuss the effect of pretreatment temperature on lignin and hemicellulose removal, cellulose degradation and enzymatic saccharification yields. Very good fractionation was achieved across all investigated temperatures, including an enzymatic saccharification yield exceeding 75% of the theoretical maximum after only 15 min of treatment at 180°C. We further characterised the recovered lignins, establishing some tunability of the hydroxyl group content, subunit composition, connectivity and molecular weight distribution in the isolated lignin while maintaining maximum saccharification yield. This drastic reduction of pretreatment time at increased biomass loading without a yield penalty is promising for the development of a commercial ionoSolv pretreatment process.
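As a note on the reported metric: below is a minimal sketch of how an enzymatic saccharification yield as a percentage of the theoretical maximum is commonly computed, assuming the standard 0.90 anhydro correction (162/180) for converting released glucose back to glucan equivalents; the function name and example figures are illustrative, not values from this study.

```python
def saccharification_yield(glucose_released_g: float,
                           pulp_mass_g: float,
                           glucan_fraction: float) -> float:
    """Enzymatic saccharification yield as % of the theoretical maximum.

    glucose_released_g : glucose measured after enzymatic hydrolysis (g)
    pulp_mass_g        : dry mass of pretreated pulp hydrolysed (g)
    glucan_fraction    : glucan content of the pulp (g glucan / g dry pulp)
    """
    # 0.90 = 162/180 anhydro correction: one water molecule is added per
    # glucose unit on hydrolysis, so glucose mass overstates glucan mass.
    glucan_equivalent = glucose_released_g * 0.90
    theoretical_glucan = pulp_mass_g * glucan_fraction
    return 100.0 * glucan_equivalent / theoretical_glucan

# Illustrative numbers only (not from the paper):
print(f"{saccharification_yield(0.085, 0.20, 0.50):.1f} %")  # -> 76.5 %
```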
Stimmensplitting und Koalitionswahl [Ticket splitting and coalition voting]
Did the FDP's independence strategy pay off at the last Bundestag election? Would the FDP have been more successful if it had clearly signalled in advance that it was seeking a coalition with the Union (CDU/CSU)? And what about the Greens, who, unlike the FDP, left no room for doubt? Naturally, we cannot simply rerun the campaign and hold the election again, as in a simulation or an experiment. To find a satisfactory answer to this question, we compare the context of the 2002 Bundestag election with previous Bundestag elections. From this longitudinal comparison we try to draw conclusions about the substantive influence of strategic ticket splitting, in the sense of coalition voting, on the electoral result, particularly for the small parties. To answer our research question and draw substantive conclusions, it must first be clear in what form and why ticket splitting can matter, what role pre-election coalition signals play, and, finally, what alternative explanations the literature on ticket splitting and strategic voting has to offer. Only if we also allow for the effect of alternative and partly competing hypotheses can we be confident in our conclusions.
Astrometric calibration and performance of the Dark Energy Camera
We characterize the ability of the Dark Energy Camera (DECam) to perform relative astrometry across its 500 Mpix, 3 deg^2 science field of view, and across 4 years of operation. This is done using internal comparisons of ~4x10^7 measurements of high-S/N stellar images obtained in repeat visits to fields of moderate stellar density, with the telescope dithered to move the sources around the array. An empirical astrometric model includes terms for: optical distortions; stray electric fields in the CCD detectors; chromatic terms in the instrumental and atmospheric optics; shifts in CCD relative positions of up to ~10 um when the DECam temperature cycles; and low-order distortions to each exposure from changes in atmospheric refraction and telescope alignment. Errors in this astrometric model are dominated by stochastic variations with typical amplitudes of 10-30 mas (in a 30 s exposure) and 5-10 arcmin coherence length, plausibly attributed to Kolmogorov-spectrum atmospheric turbulence. The size of these atmospheric distortions is not closely related to the seeing. Given an astrometric reference catalog at density ~0.7 arcmin^{-2}, e.g. from Gaia, the typical atmospheric distortions can be interpolated to 7 mas RMS accuracy (for 30 s exposures) with 1 arcmin coherence length for residual errors. Remaining detectable error contributors are 2-4 mas RMS from unmodelled stray electric fields in the devices, and another 2-4 mas RMS from focal plane shifts between camera thermal cycles. Thus the astrometric solution for a single DECam exposure is accurate to 3-6 mas (0.02 pixels, or 300 nm) on the focal plane, plus the stochastic atmospheric distortion.
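To make the interpolation step concrete: a minimal sketch, assuming Gaussian-process (kriging) interpolation of per-exposure astrometric residuals measured at reference-star positions, with a kernel length scale of order the quoted arcminute coherence length. The mock field, kernel choices, and numbers are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Mock reference stars: positions (arcmin) on a 10'x10' patch at ~0.7/arcmin^2,
# with a smooth "atmospheric" x-residual field (mas) plus measurement noise.
xy_ref = rng.uniform(0.0, 10.0, size=(70, 2))
true_field = lambda xy: 20.0 * np.sin(xy[:, 0] / 1.5) * np.cos(xy[:, 1] / 1.5)
dx_ref = true_field(xy_ref) + rng.normal(0.0, 3.0, size=len(xy_ref))

# RBF kernel with ~1 arcmin length scale models the turbulence coherence;
# WhiteKernel absorbs per-star measurement noise. (Repeat for the y residuals.)
kernel = 20.0**2 * RBF(length_scale=1.0) + WhiteKernel(noise_level=3.0**2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(xy_ref, dx_ref)

# Predict the distortion at arbitrary target positions and check the residual.
xy_target = rng.uniform(0.0, 10.0, size=(1000, 2))
dx_pred = gp.predict(xy_target)
rms = np.sqrt(np.mean((dx_pred - true_field(xy_target))**2))
print(f"residual RMS after interpolation: {rms:.1f} mas")
```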
Quasar accretion disk sizes from continuum reverberation mapping in the DES standard-star fields
Measurements of the physical properties of accretion disks in active galactic nuclei are important for better understanding the growth and evolution of supermassive black holes. We present the accretion disk sizes of 22 quasars from continuum reverberation mapping with data from the Dark Energy Survey (DES) standard star fields and the supernova C fields. We construct continuum lightcurves with the griz photometry that span five seasons of DES observations. These data sample the time variability of the quasars with a cadence as short as one day, which corresponds to a rest-frame cadence a factor of a few higher than most previous work. We derive time lags between bands with both JAVELIN and the interpolated cross-correlation function method, and fit for accretion disk sizes using the JAVELIN Thin Disk model. These new measurements include disks around black holes at the low-mass end of previous reverberation samples, with correspondingly small rest-frame equivalent sizes at 2500 Å. We find that most objects have accretion disk sizes consistent with the prediction of the standard thin disk model when we take disk variability into account. We have also simulated the expected yield of accretion disk measurements under various observational scenarios for the Large Synoptic Survey Telescope Deep Drilling Fields. We find that the number of disk measurements would increase significantly if the default cadence is changed from three days to two days or one day.
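For readers unfamiliar with the lag-measurement step: a minimal sketch of the interpolated cross-correlation function (ICCF) idea used alongside JAVELIN, where one light curve is linearly interpolated onto the shifted time stamps of the other and the lag estimate is taken from the correlation peak. The light curves and grid choices here are illustrative assumptions, not the paper's data.

```python
import numpy as np

def iccf_lag(t1, f1, t2, f2, lags):
    """Interpolated cross-correlation: for each trial lag tau (days),
    evaluate band 2 at t1 + tau via linear interpolation, correlate it
    with band 1, and report the lag at the correlation peak."""
    r = np.empty(len(lags))
    for i, tau in enumerate(lags):
        f2_shift = np.interp(t1 + tau, t2, f2)  # band 2 sampled at t1 + tau
        r[i] = np.corrcoef(f1, f2_shift)[0, 1]
    return r, lags[np.argmax(r)]

# Illustrative mock light curves: band 2 echoes band 1 with a 5-day delay.
t = np.sort(np.random.default_rng(1).uniform(0.0, 180.0, 120))
sig = lambda x: np.sin(x / 12.0) + 0.5 * np.sin(x / 31.0)
r, peak = iccf_lag(t, sig(t), t, sig(t - 5.0), np.arange(-20.0, 20.0, 0.25))
print(f"recovered lag: {peak:.2f} days")  # expect ~ +5
```

In practice the lag uncertainty is usually estimated by repeating this on bootstrap resamplings of the light curves and quoting the distribution of peak (or centroid) lags.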
H0LiCOW X: Spectroscopic/imaging survey and galaxy-group identification around the strong gravitational lens system WFI2033-4723
Galaxies and galaxy groups located along the line of sight towards gravitationally lensed quasars produce high-order perturbations of the gravitational potential at the lens position. When these perturbations are too large, they can induce a systematic error on H0 of a few per cent if the lens system is used for cosmological inference and the perturbers are not explicitly accounted for in the lens model. In this work, we present a detailed characterization of the environment of the lens system WFI2033-4723 (z_lens = 0.6575), one of the core targets of the H0LiCOW project, for which we present cosmological inferences in a companion paper (Rusu et al. 2019). We use the Gemini and ESO Very Large Telescopes to measure the spectroscopic redshifts of the brightest galaxies towards the lens, and use the ESO-MUSE integral field spectrograph to measure the velocity dispersion of the lens and of several nearby galaxies. In addition, we measure photometric redshifts and stellar masses of all galaxies down to a fixed magnitude limit, mainly based on Dark Energy Survey imaging (DR1). Our new catalog, complemented with literature data, more than doubles the number of known galaxy spectroscopic redshifts in the direct vicinity of the lens, expanding to 116 (64) the number of spectroscopic redshifts for galaxies separated by less than 3 arcmin (2 arcmin) from the lens. Using the flexion shift as a measure of the amplitude of the gravitational perturbation, we identify 2 galaxy groups and 3 galaxies that require specific attention in the lens models. The ESO-MUSE data enable us to measure the velocity dispersions of three of these galaxies. These results are essential for the cosmological inference analysis presented in Rusu et al. (2019).
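To illustrate the selection criterion: a minimal sketch of a flexion-shift calculation in the spirit of McCully et al. (2017), where a perturber is flagged when Delta_3x = f(beta) (theta_E theta_E,p)^2 / theta^3 exceeds a threshold (commonly taken around 10^-4 arcsec). The prefactor convention, threshold, and all numbers below are assumptions for illustration, not values quoted from this paper.

```python
def flexion_shift(theta_E_lens, theta_E_pert, separation, f_beta=1.0):
    """Flexion shift Delta_3x = f(beta) * (theta_E * theta_E_pert)**2 / theta**3.

    All angles in arcsec. f_beta = 1 for foreground perturbers; for
    background perturbers f_beta = (1 - beta)**2, with beta the usual
    two-plane lensing efficiency factor (assumed supplied by the caller).
    """
    return f_beta * (theta_E_lens * theta_E_pert) ** 2 / separation ** 3

# Illustrative only: a ~1" Einstein-radius lens and a nearby perturber
# with theta_E,p = 0.3" at 10" separation.
d3x = flexion_shift(1.0, 0.3, 10.0)
print(f"Delta_3x = {d3x:.2e} arcsec",
      "-> needs explicit modelling" if d3x > 1e-4 else "-> negligible")
```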
Phenotypic redshifts with self-organizing maps: A novel method to characterize redshift distributions of source galaxies for weak lensing
Wide-field imaging surveys such as the Dark Energy Survey (DES) rely on coarse measurements of spectral energy distributions in a few filters to estimate the redshift distribution of source galaxies. In this regime, sample variance, shot noise, and selection effects limit the attainable accuracy of redshift calibration and thus of cosmological constraints. We present a new method to combine wide-field, few-filter measurements with catalogs from deep fields with additional filters and sufficiently low photometric noise to break degeneracies in photometric redshifts. The multi-band deep field is used as an intermediary between wide-field observations and accurate redshifts, greatly reducing sample variance, shot noise, and selection effects. Our implementation of the method uses self-organizing maps to group galaxies into phenotypes based on their observed fluxes, and is tested using a mock DES catalog created from N-body simulations. For an idealized simulation of the DES Year 3 weak-lensing tomographic analysis, it yields a typical uncertainty on the mean redshift in each of five tomographic bins that is a 60% improvement compared to the Year 1 analysis. Although the implementation of the method is tailored to DES, its formalism can be applied to other large photometric surveys with a similar observing strategy.
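To make the phenotype idea concrete: a minimal hand-rolled sketch of a self-organizing map that groups objects into cells by their observed fluxes, so that each cell ("phenotype") can later be assigned a redshift distribution from the deep, well-measured sample. Grid size, learning schedule, and the mock fluxes are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def train_som(data, grid=(12, 12), n_iter=5000, seed=0):
    """Train a small SOM; returns weights of shape (gx, gy, n_features)."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    w = rng.normal(size=(gx, gy, data.shape[1]))
    # Cell coordinates, used to compute neighbourhood distances on the grid.
    coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), -1)
    for it in range(n_iter):
        lr = 0.5 * (1 - it / n_iter)                   # decaying learning rate
        sigma = max(gx / 2 * (1 - it / n_iter), 1.0)   # shrinking neighbourhood
        x = data[rng.integers(len(data))]
        bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), (gx, gy))
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-dist2 / (2 * sigma**2))[..., None]  # neighbourhood kernel
        w += lr * h * (x - w)                           # pull cells towards sample
    return w

def phenotype(w, data):
    """Assign each object to its best-matching cell (flattened index)."""
    d2 = ((data[:, None, None, :] - w[None]) ** 2).sum(-1)
    return d2.reshape(len(data), -1).argmin(1)

# Mock "galaxies": 4-band fluxes (griz-like) from two colour populations.
rng = np.random.default_rng(1)
fluxes = np.vstack([rng.normal([1, 2, 3, 4], 0.3, (500, 4)),
                    rng.normal([4, 3, 2, 1], 0.3, (500, 4))])
cells = phenotype(train_som(fluxes), fluxes)
print("occupied phenotype cells:", len(np.unique(cells)))
```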
Transfer learning for galaxy morphology from one survey to another
Deep Learning (DL) algorithms for morphological classification of galaxies have proven very successful, mimicking (or even improving on) visual classifications. However, these algorithms rely on large training samples of labelled galaxies (typically thousands of them). A key question for using DL classifications in future Big Data surveys is how much of the knowledge acquired from an existing survey can be exported to a new dataset, i.e. whether the features learned by the machines are meaningful for different data. We test the performance of DL models, trained with Sloan Digital Sky Survey (SDSS) data, on the Dark Energy Survey (DES) using images for a sample of 5000 galaxies with a similar redshift distribution to SDSS. Applying the models directly to DES data provides a reasonable global accuracy (~90%), but small completeness and purity values. A fast domain adaptation step, consisting of further training with a small DES sample of galaxies (500-300), is enough to obtain an accuracy > 95% and a significant improvement in the completeness and purity values. This demonstrates that, once trained with a particular dataset, machines can quickly adapt to new instrument characteristics (e.g., PSF, seeing, depth), reducing by almost one order of magnitude the necessary training sample for morphological classification. Redshift evolution effects or significant depth differences are not taken into account in this study.
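To sketch what such a domain-adaptation step can look like in practice: a minimal PyTorch example that freezes the convolutional body of a pretrained network and fine-tunes only a new classification head on a small labelled sample from the target survey. The backbone choice (torchvision ResNet-18), the binary task, and the training settings are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

# A pretrained backbone stands in for a model trained on the source survey.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the feature extractor: only the new head will be updated.
for p in model.parameters():
    p.requires_grad = False

# Replace the head for a binary task (e.g. early- vs late-type galaxies).
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Mock "small target-survey sample": a few hundred images and labels.
images = torch.randn(128, 3, 224, 224)
labels = torch.randint(0, 2, (128,))

model.train()
for epoch in range(3):
    for i in range(0, len(images), 32):          # simple mini-batch loop
        x, y = images[i:i+32], labels[i:i+32]
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```

Because the frozen body already encodes generic image features, only a few hundred labelled target-survey galaxies are needed, which mirrors the order-of-magnitude reduction in training-sample size reported in the abstract.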