
    Future wishes and constraints from the experiments at the LHC for the Proton-Proton programme

    Hosting six different experiments at four interaction points, with widely different requirements for the running conditions, the LHC machine faced a long list of challenges in its first three years of luminosity production (2010-2012, Run 1), many of which could have limited performance through instabilities driven by the extremely high bunch brightness. Nonetheless, the LHC met these challenges and performed extremely well: high efficiency, routine operation at twice the design beam brightness, well over one-third of the time in collision for physics, average luminosity lifetimes in excess of 10 h, and extremely good background conditions in the experiments. While the experimental running configurations remain largely the same for the future high-luminosity proton-proton operational mode, the energy and the luminosity should increase significantly, making a prior assessment of the related beam-beam effects extremely important to guarantee high performance. Of particular interest is the need to level the luminosity individually in the different experiments. Luminosity control, as the more general version of 'levelling', has been at the heart of the success of LHCb, and to a large extent also of ALICE, throughout Run 1. With increasing energy and potential luminosity, luminosity control may be required by all the experiments at some point in the future as a means of controlling the pileup conditions and trigger rates, but possibly also as a way of optimizing the integrated luminosity. This paper reviews the various motivations and possibilities for controlling the luminosity from the experiments' point of view, and outlines the future running conditions and desiderata of the experiments as currently viewed, with the aim of giving guidelines for the different options.

    Comment: 10 pages, contribution to the ICFA Mini-Workshop on Beam-Beam Effects in Hadron Colliders, CERN, Geneva, Switzerland, 18-22 Mar 201
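    To see why pileup motivates luminosity levelling, the average number of inelastic interactions per bunch crossing can be estimated directly from the instantaneous luminosity. A minimal sketch with typical Run-1-era parameter values, assumed here for illustration rather than taken from the paper:

    ```python
    # Average pileup per crossing: mu = L * sigma_inel / (n_b * f_rev).
    # All values below are illustrative, not figures from the paper.
    L = 1.0e34            # instantaneous luminosity [cm^-2 s^-1]
    sigma_inel = 8.0e-26  # inelastic pp cross section, ~80 mb [cm^2]
    n_b = 2808            # colliding bunch pairs (nominal LHC filling scheme)
    f_rev = 11245.0       # LHC revolution frequency [Hz]

    mu = L * sigma_inel / (n_b * f_rev)
    print(f"average pileup per crossing: {mu:.1f}")  # ~25
    ```

    Levelling holds L (and hence mu) at a target value for as long as possible instead of letting it peak and decay, which keeps detector occupancy and trigger rates within budget.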

    Comics, robots, fashion and programming: outlining the concept of actDresses

    This paper concerns the design of physical languages for controlling and programming robotic consumer products. For this purpose we explore basic theories of semiotics represented in the two separate fields of comics and fashion, and how these could serve as resources in the development of new physical languages. Based on these theories, the design concept of actDresses is defined and supplemented by three example scenarios showing how the concept can be used for controlling, programming, and predicting the behaviour of robotic systems.

    Nonsense and the Freedom of Speech: What Meaning Means for the First Amendment

    A great deal of everyday expression is, strictly speaking, nonsense. But courts and scholars have done little to consider whether or why such meaningless speech, like nonrepresentational art, falls within “the freedom of speech.” If, as many suggest, meaning is what separates speech from sound and expression from conduct, then the constitutional case for nonsense is complicated. And because nonsense is so common, the case is also important — artists like Lewis Carroll and Jackson Pollock are not the only putative “speakers” who should be concerned about the outcome. This Article is the first to explore thoroughly the relationship between nonsense and the freedom of speech; in doing so, it suggests ways to determine what “meaning” means for First Amendment purposes. The Article begins by demonstrating the scope and constitutional salience of meaningless speech, showing that nonsense is multifarious, widespread, and sometimes intertwined with traditional First Amendment values like autonomy, the marketplace of ideas, and democracy. The second part of the Article argues that exploring nonsense can illuminate the meaning of meaning itself. This, too, is an important task, for although free speech discourse often relies on the concept of meaning to chart the Amendment’s scope, courts and scholars have done relatively little to establish what it entails. Analytic philosophers, meanwhile, have spent the past century doing little else. Their efforts — echoes of which can already be heard in First Amendment doctrine — suggest that free speech doctrine is best served by finding meaning in the way words are used, rather than in their relationship to extra-linguistic concepts.

    Radiocarbon evidence for the pace of the M-/L-PPNB transition in the 8th millennium BC south-west Asia

    The transition from the Middle to the Late Pre-Pottery Neolithic B (PPNB) happened throughout southwest Asia in the mid-8th millennium cal BC. It entailed the abandonment of a number of sites, the rapid growth of others, and the wide spread of morphologically domestic caprines. What remains unknown is how rapid these processes were in real time. Over the period when the transition was taking place, the calibration curve has two shallow sections divided by a sudden drop, which for many of the older dates creates an illusion of an abrupt cultural break around 7600–7500 cal BC. Yet the more detailed study presented in this paper suggests that the transition could have been spread over a more extended period of time. This, however, is still far from certain, owing to the risk of old-wood effects and the complexities of site formation.

    Political Media Contests and Confirmatory Bias

    This paper models a two-period media contest between two political candidates campaigning to win an election. Two main cases are examined. In the first case, voters behave as unbiased Bayesian updaters when assessing political information; in the second, voters suffer from confirmatory bias. In the first case I find that candidates spend equal amounts of their campaign funds in both periods in equilibrium. In the second case, candidates spend more in period one. A candidate with better media access (in period one) fares better, however, if voters suffer from confirmatory bias than if they do not.

    Keywords: election campaigns; voting behavior; confirmatory bias

    Massive MU-MIMO-OFDM Downlink with One-Bit DACs and Linear Precoding

    Massive multiuser (MU) multiple-input multiple-output (MIMO) is foreseen to be a key technology in future wireless communication systems. In this paper, we analyze the downlink performance of an orthogonal frequency division multiplexing (OFDM)-based massive MU-MIMO system in which the base station (BS) is equipped with 1-bit digital-to-analog converters (DACs). Using Bussgang's theorem, we characterize the performance achievable with linear precoders (such as maximal-ratio transmission and zero forcing) in terms of bit error rate (BER). Our analysis accounts for the possibility of oversampling the time-domain transmit signal before the DACs. We further develop a lower bound on the information-theoretic sum-rate throughput achievable with Gaussian inputs. Our results suggest that the performance achievable with 1-bit DACs in a massive MU-MIMO-OFDM downlink is satisfactory, provided that the number of BS antennas is sufficiently large.
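    Bussgang's theorem, which underpins the analysis above, decomposes the output of a nonlinearity driven by a Gaussian input into a scaled copy of the input plus distortion that is uncorrelated with the input. A minimal numerical check for the 1-bit (sign) quantizer with a real Gaussian input is sketched below; this is an illustration of the theorem only, not the paper's full complex-valued OFDM signal model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 1.3
    x = rng.normal(0.0, sigma, 1_000_000)  # Gaussian input to the DAC
    y = np.sign(x)                         # 1-bit DAC model: keep only the sign

    # Bussgang decomposition: y = B*x + d, with d uncorrelated with x.
    B_emp = np.mean(x * y) / np.mean(x**2)
    B_theory = np.sqrt(2 / np.pi) / sigma  # closed form for the sign nonlinearity

    d = y - B_emp * x
    print(B_emp, B_theory)                 # empirical gain matches theory
    print(np.mean(x * d))                  # ~0: distortion uncorrelated with input
    print(np.var(d), 1 - 2 / np.pi)        # distortion power ~ 1 - 2/pi
    ```

    The fixed linear gain B is what makes BER and sum-rate analysis tractable: the quantized output behaves like the intended signal scaled by B, plus an additive distortion term whose power can be bounded.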

    Linear Precoding with Low-Resolution DACs for Massive MU-MIMO-OFDM Downlink

    We consider the downlink of a massive multiuser (MU) multiple-input multiple-output (MIMO) system in which the base station (BS) is equipped with low-resolution digital-to-analog converters (DACs). In contrast to most existing results, we assume that the system operates over a frequency-selective wideband channel and uses orthogonal frequency division multiplexing (OFDM) to simplify equalization at the user equipments (UEs). Furthermore, we consider the practically relevant case of oversampling DACs. We theoretically analyze the uncoded bit error rate (BER) performance with linear precoders (e.g., zero forcing) and quadrature phase-shift keying using Bussgang's theorem. We also develop a lower bound on the information-theoretic sum-rate throughput achievable with Gaussian inputs, which can be evaluated in closed form for the case of 1-bit DACs. For the case of multi-bit DACs, we derive approximate, yet accurate, expressions for the distortion caused by low-precision DACs, which can be used to establish lower bounds on the corresponding sum-rate throughput. Our results demonstrate that, for a massive MU-MIMO-OFDM system with a 128-antenna BS serving 16 UEs, only 3--4 DAC bits are required to achieve an uncoded BER of 10^-4 with a negligible performance loss compared to the infinite-resolution case, at the cost of additional out-of-band emissions. Furthermore, our results highlight the importance of taking into account the inherent spatial and temporal correlations caused by low-precision DACs.
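    A rough intuition for why a few DAC bits already suffice comes from the classical Delta^2/12 approximation for uniform quantization noise, whose power shrinks by a factor of four per added bit. The sketch below checks this rule numerically for a generic uniform mid-rise quantizer; it is a textbook illustration under a uniform-input assumption, not the paper's precoder-specific distortion model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    bits = 3
    x = rng.uniform(-1.0, 1.0, 1_000_000)  # signal confined to the quantizer range

    levels = 2**bits
    delta = 2.0 / levels                   # step size over [-1, 1)
    # Mid-rise uniform quantizer: round down to a cell, output the cell midpoint.
    q = np.clip(np.floor(x / delta) * delta + delta / 2,
                -1 + delta / 2, 1 - delta / 2)

    mse = np.mean((x - q)**2)
    print(mse, delta**2 / 12)              # measured distortion vs. Delta^2/12
    ```

    Each extra bit halves delta and therefore quarters the distortion power (about 6 dB per bit), which is consistent with the finding that 3--4 bits already bring the uncoded BER close to the infinite-resolution case.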