Few smooth d-polytopes with n lattice points
We prove that, for fixed n, there exist only finitely many embeddings of Q-factorial toric varieties X into P^n that are induced by a complete linear system. The proof is based on a combinatorial result: for fixed nonnegative integers d and n, there are only finitely many smooth d-polytopes with n lattice points. We also enumerate all smooth 3-polytopes with at most 12 lattice points. In fact, it is sufficient to bound the singularities and the number of lattice points on edges to prove finiteness.
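As a concrete illustration of the smoothness condition used above (a sketch of the standard definition, not code from the paper): a lattice polytope is smooth if it is simple and, at every vertex, the primitive edge directions form a basis of the lattice, i.e. the matrix they span has determinant ±1. The helper below checks this at one vertex of a 3-polytope; all function names are our own.

```python
from functools import reduce
from math import gcd

def primitive(v):
    """Divide a nonzero integer vector by the gcd of its entries."""
    g = reduce(gcd, (abs(x) for x in v))
    return tuple(x // g for x in v)

def det3(a, b, c):
    """Determinant of the 3x3 integer matrix with rows a, b, c."""
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def vertex_is_smooth(vertex, neighbors):
    """Check smoothness at a vertex of a simple lattice 3-polytope:
    the primitive edge directions towards the three neighboring
    vertices must form a basis of Z^3 (determinant +-1)."""
    dirs = [primitive(tuple(n[i] - vertex[i] for i in range(3)))
            for n in neighbors]
    return abs(det3(*dirs)) == 1

# The unit cube is smooth: at the origin, the edges point along e1, e2, e3.
print(vertex_is_smooth((0, 0, 0), [(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
```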
Mathematisches Forschungsinstitut Oberwolfach Report No. 20/2006
Abstract. The main aim of fine structure theory and inner model theory can be summarized as the construction of models which have a canonical inner structure (a fine structure), making it possible to analyze them in great detail, and which at the same time reflect important aspects of the surrounding mathematical universe, in that they satisfy certain strong axioms of infinity, or contain complicated sets of reals. Applications range from obtaining lower bounds on the consistency strength of all sorts of set theoretic principles in terms of large cardinals, to proving the consistency of certain combinatorial properties, their compatibility with strong axioms of infinity, or outright proving results in descriptive set theory (for which no proofs avoiding fine structure and inner models are in sight). Fine structure theory and inner model theory have become a sophisticated and powerful apparatus which yields results that are among the deepest in set theory.
Strengthening democracy through bottom-up deliberation: An assessment of the internal legitimacy of the G1000 project
From the beginning of the 1990s onwards, political analysts across Western European countries discerned the contours of what they took to be a widespread crisis of democracy. The alleged decline of political trust and public participation, together with the rise of electoral volatility, suggested that the gap between politicians and citizens had never been wider. This political climate, characterized by a deep-rooted crisis of democratic legitimacy, offered an excellent breeding ground for critical reflection on the role, shape and function of democracy in modern societies. It gave rise to a fruitful quest for new and innovative ways of governing a democracy.
It was in this turbulent period that the ideal of deliberative democracy was coined (Dryzek 2000). A community of international scholars and philosophers, inspired by the work of Jürgen Habermas, became increasingly convinced that a vibrant democracy is more than the aggregate of its individual citizens, and that democratic politics should be about more than merely voting. The quality of a democracy and the quality of democratic decisions, according to them, did not depend on the correct aggregation of individual preferences, but rather on the quality of the public debate that preceded the voting stage. Democratic decisions were thus no longer considered a function of mere compliance with aggregation rules; instead, they were seen as the product of extensive argumentation about political choices before voting on them.
Because of its strong focus on public involvement in politics, this deliberative model of democracy started out in life as a theory of legitimacy (Benhabib 1996; Cohen 2002; Dryzek 2001; Parkinson 2006). By including everyone who is affected by a decision in the process leading to that decision, deliberation has important political merits: it is capable of generating political decisions that receive broad public support, even when there is strong disagreement on the aims and values a polity should promote (Geenens & Tinnevelt 2007, p. 47). After all, talking about political issues allows citizens to hear other perspectives on a problem and to see their own perspectives represented in the final decision. As such, deliberative democracy seeks to score high on input, throughput and output legitimacy.
However, deliberation’s beneficial effects do not come about easily. If deliberative democracy is to contribute to the legitimacy of the political system as a whole, it has to be legitimate in itself. In other words, deliberative events have to reflect the principles of legitimacy in their own functioning before their outcomes can generate legitimate political decisions. It is therefore crucial to assess the internal legitimacy of deliberative mini-publics before making claims about their contribution to the legitimacy of the political system as a whole.
In this paper, we set out to assess the internal legitimacy of one specific deliberative event, namely the G1000 project in Belgium (Caluwaerts & Reuchamps, 2012a). Our research question is therefore: to what extent does the G1000 live up to the criteria of input, throughput and output legitimacy? The G1000 project occupies a particular place in the world of deliberative practice because it was grassroots not only in its process and its results, but also in its organization. Most deliberative events are introduced and funded by either public administrations or scientific institutions. The G1000, by contrast, was a citizens’ initiative from its very inception: all of the organizers of the event were volunteers, and all of the funds were gathered through crowdfunding. So instead of a scientific experiment, the G1000 was more of a democratic experiment by, through, and for citizens. This grassroots structure makes it a very interesting case for students of legitimacy because, as we will see later on, it is situated at the heart of the democratic trade-off between input and output legitimacy.
A summary of the 2012 JHU CLSP workshop on zero resource speech technologies and models of early language acquisition
We summarize the accomplishments of a multi-disciplinary workshop exploring the computational and scientific issues surrounding zero resource (unsupervised) speech technologies and related models of early language acquisition. Centered around the tasks of phonetic and lexical discovery, we consider unified evaluation metrics, present two new approaches for improving speaker independence in the absence of supervision, and evaluate the application of Bayesian word segmentation algorithms to automatic subword unit tokenizations. Finally, we present two strategies for integrating zero resource techniques into supervised settings, demonstrating the potential of unsupervised methods to improve mainstream technologies.
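As background on the segmentation task mentioned above, the sketch below is a deliberately simplified, non-Bayesian stand-in: a Viterbi-style unigram segmenter that splits a sequence of discovered subword units into "words" under a toy lexicon score. It is not one of the workshop's models; segment, lexicon_logp, and the unseen-word penalty are all our own assumptions.

```python
import math

def segment(units, lexicon_logp, max_len=5):
    """Viterbi-style unigram segmentation of a subword-unit sequence.

    units        -- list of discovered subword tokens, e.g. ['d', 'o', 'g']
    lexicon_logp -- dict mapping candidate 'words' (tuples of units) to a
                    log-probability; unseen candidates get a length penalty.
    """
    n = len(units)
    best = [(-math.inf, 0)] * (n + 1)   # best[j] = (score, split point)
    best[0] = (0.0, 0)
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            w = tuple(units[i:j])
            lp = lexicon_logp.get(w, -10.0 * len(w))  # crude unseen-word penalty
            score = best[i][0] + lp
            if score > best[j][0]:
                best[j] = (score, i)
    words, j = [], n                    # backtrace the best segmentation
    while j > 0:
        i = best[j][1]
        words.append(tuple(units[i:j]))
        j = i
    return list(reversed(words))

toks = ['th', 'e', 'd', 'o', 'g']
lex = {('th', 'e'): math.log(0.5), ('d', 'o', 'g'): math.log(0.1)}
print(segment(toks, lex))  # [('th', 'e'), ('d', 'o', 'g')]
```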
Simulation Tools for Heavy-Ion Tracking and Collimation
The LHC collimation system, which protects the LHC hardware from undesired beam loss, is less efficient with heavy-ion beams than with proton beams because the ions fragment into other nuclides inside the LHC collimators. Reliable simulation tools are required to estimate critical losses of particles scattered out of the collimation system, which may quench the superconducting LHC magnets. Tracking simulations need to take into account the mass and charge of the tracked ions. Heavy ions can be tracked as protons of ion-equivalent rigidity using proton tracking tools like SixTrack, as is done in the simulation tool STIER. Alternatively, new tracking maps can be derived from a generalized accelerator Hamiltonian and implemented in SixTrack; this approach is used in the new tool heavy-ion SixTrack. When coupled to particle-matter interaction tools, both can be used to simulate the collimation efficiency with heavy-ion beams. Both simulation tools are presented and compared to measurements.
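A minimal numerical sketch of the ion-equivalent-rigidity idea mentioned above (our own illustration, not code from STIER or SixTrack): since magnetic rigidity is Bρ = p/(Ze), an ion of momentum p and charge Ze follows the same orbits in a given magnetic lattice as a proton of momentum p/Z.

```python
def ion_equivalent_proton_momentum(p_ion, Z):
    """Momentum of the proton with the same magnetic rigidity
    B*rho = p / (Z e) as an ion of momentum p_ion and charge Z e."""
    return p_ion / Z

# Illustrative numbers: a fully stripped 208Pb82+ beam with momentum
# 82 x 6500 GeV/c (6.5 TeV/c per unit charge, as in LHC Run 2) has the
# same rigidity as a 6500 GeV/c proton and can be tracked as one.
p_pb = 82 * 6500.0  # GeV/c
print(ion_equivalent_proton_momentum(p_pb, Z=82))  # 6500.0 GeV/c
```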