
    Coherent Multibeam Arrays Using a Cold Aperture Stop

    To increase the mapping speed for a given area of sky, multibeam heterodyne arrays may be used. Since typical heterodyne arrays are spatially arranged sparsely, at approximately 4× Nyquist sampling (i.e., two full-width-at-half-maximum beam widths), many pointings are required to fully sample the area of interest. A cold aperture stop may be used to increase the packing density of the detectors, which results in a denser instantaneous spatial sampling on-sky. By combining reimaging optics with the cold stop, good aperture efficiency can be obtained. As expected, however, a significant amount of power is truncated at the stop and the surrounding baffling. We analyze the consequences of this power truncation and explore the possibility of using this layout for coherent detection as a multibeam feed. We show that for a fixed area of sky, a "twice-Nyquist" spatial sampling arrangement may improve the normalized point-source mapping speed when the system noise temperature is dominated by the background or atmospheric contribution.
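
    The trade-off described above can be made concrete with a standard textbook figure of merit for an N-beam heterodyne array; the scaling below is a generic form, not the paper's exact expression, with N the number of beams, eta_a the aperture efficiency and T_sys the system noise temperature.

    ```latex
    % Hedged sketch: generic point-source mapping-speed scaling, not the
    % paper's derivation. Denser packing behind a cold stop raises N but
    % lowers \eta_a through truncation at the stop, so this product decides
    % whether the trade is favourable.
    \mathrm{MS} \;\propto\; \frac{N\,\eta_a^{2}}{T_{\mathrm{sys}}^{2}}
    ```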

    The OS* Algorithm: a Joint Approach to Exact Optimization and Sampling

    Most current sampling algorithms for high-dimensional distributions are based on MCMC techniques and are approximate in the sense that they are valid only asymptotically. Rejection sampling, on the other hand, produces valid samples, but is unrealistically slow in high-dimension spaces. The OS* algorithm that we propose is a unified approach to exact optimization and sampling, based on incremental refinements of a functional upper bound, which combines ideas of adaptive rejection sampling and of A* optimization search. We show that the choice of the refinement can be done in a way that ensures tractability in high-dimension spaces, and we present first experiments in two different settings: inference in high-order HMMs and in large discrete graphical models.Comment: 21 page
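
    As a rough illustration of the accept/reject-with-refinement loop the abstract describes, the Python sketch below combines exact rejection sampling with an upper bound that is tightened after each rejection. The function names and signatures (p, q_init, sample_from_q, refine) are hypothetical placeholders, not the paper's API.

    ```python
    import random

    def os_star_sample(p, refine, q_init, sample_from_q, n_samples):
        """Minimal sketch of an OS*-style sampling loop (unnormalised, discrete case).

        p            : target unnormalised score, p(x) >= 0
        q_init       : initial proposal with q(x) >= p(x) for all x
        sample_from_q: draws x with probability proportional to q
        refine       : returns a tighter proposal q' with p <= q' <= q,
                       typically lowered around the rejected point x
        (all names here are illustrative placeholders, not the paper's API)
        """
        q = q_init
        samples = []
        while len(samples) < n_samples:
            x = sample_from_q(q)
            if random.random() < p(x) / q(x):   # exact accept/reject test
                samples.append(x)               # accepted samples are exact draws from p
            else:
                q = refine(q, x)                # tighten the bound where it was loose
        return samples
    ```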

    Item-by-item sampling for promotional purposes

    This is an accepted manuscript of an article accepted for publication by Taylor & Francis in Quality Technology & Quantitative Management on 7 June 2017, available online at doi: http://www.tandfonline.com/doi/abs/10.1080/16843703.2017.1335494

    In this paper we present a method for sampling items that are checked on a pass/fail basis, with a view to a statement being made about the success/failure rate for the purposes of promoting an organisation’s product/service to potential clients/customers. Attention is paid to the appropriate use of statistical phrases for the statements, and this leads to the use of Bayesian credible intervals, thus exceeding what can be achieved with standard acceptance sampling techniques. The hypergeometric distribution is used to calculate successive stopping rules so that the resources used for sampling can be minimised. Extensions to the sampling procedure are considered to allow the potential for stronger and weaker statements to be made as sampling progresses. The relationship between the true error rate and the probabilities of making correct statements is discussed.
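
    As a minimal illustration of the kind of hypergeometric calculation that such stopping rules rest on (not the paper's exact procedure, and with invented numbers), the sketch below gives the chance of seeing at most k failing items in a sample drawn without replacement from a finite lot.

    ```python
    from scipy.stats import hypergeom

    def prob_at_most_k_failures(lot_size, lot_failures, sample_size, max_failures):
        # Chance of observing at most `max_failures` failing items in a sample of
        # `sample_size` drawn without replacement from a lot of `lot_size` items,
        # `lot_failures` of which would fail if checked.
        return hypergeom(M=lot_size, n=lot_failures, N=sample_size).cdf(max_failures)

    # e.g. a lot of 500 items of which 25 would fail; sample 60 and tolerate 1 failure
    print(prob_at_most_k_failures(500, 25, 60, 1))
    ```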

    Security Games with Information Leakage: Modeling and Computation

    Most models of Stackelberg security games assume that the attacker only knows the defender's mixed strategy, but is not able to observe (even partially) the instantiated pure strategy. Such partial observation of the deployed pure strategy -- an issue we refer to as information leakage -- is a significant concern in practical applications. While previous research on patrolling games has considered the attacker's real-time surveillance, our settings, and therefore our models and techniques, are fundamentally different. More specifically, after describing the information leakage model, we start with an LP formulation to compute the defender's optimal strategy in the presence of leakage. Perhaps surprisingly, we show that a key subproblem in solving this LP (more precisely, the defender oracle) is NP-hard even for the simplest of security game models. We then approach the problem from three possible directions: efficient algorithms for restricted cases, approximation algorithms, and heuristic sampling algorithms that improve upon the status quo. Our experiments confirm the necessity of handling information leakage and the advantage of our algorithms.
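
    For readers unfamiliar with LP formulations of security games, the sketch below solves the maximin LP of a toy zero-sum game with scipy. The payoff matrix is invented, and the paper's leakage-aware Stackelberg formulation and defender oracle are considerably more involved; this only illustrates the general shape of such an LP.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy example: U[i, j] is the defender's payoff when pure defence i is
    # played and target j is attacked (numbers are made up).
    U = np.array([[ 0.0, -1.0, -0.5],
                  [-1.0,  0.0, -0.5],
                  [-0.5, -0.5,  0.0]])
    n_def, n_tgt = U.shape

    # Variables z = (x_1..x_n, v): maximise v subject to (U^T x)_j >= v for
    # every target j, sum(x) = 1, x >= 0. linprog minimises, so we minimise -v.
    c = np.zeros(n_def + 1)
    c[-1] = -1.0
    A_ub = np.hstack([-U.T, np.ones((n_tgt, 1))])   # v - (U^T x)_j <= 0
    b_ub = np.zeros(n_tgt)
    A_eq = np.hstack([np.ones((1, n_def)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n_def + [(None, None)]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print("defender mixed strategy:", res.x[:-1], "game value:", res.x[-1])
    ```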

    Variations in Integrated Galactic Initial Mass Functions due to Sampling Method and Cluster Mass Function

    [abridged] Stars are thought to be formed predominantly in clusters. The clusters are formed following a cluster initial mass function (CMF) similar to the stellar initial mass function (IMF). Both the IMF and the CMF favour low-mass objects. The numerous low-mass clusters will lack high-mass stars. If the integrated galactic initial mass function (IGIMF) originates from stars formed in clusters, the IGIMF could be steeper than the IMF. We investigate how well constrained this steepening is and how it depends on the choice of sampling method and CMF. We compare analytic sampling to several implementations of random sampling of the IMF, and different CMFs. We implement different IGIMFs into GALEV to obtain colours and metallicities for galaxies. Choosing different ways of sampling the IMF results in different IGIMFs. Depending on the lower cluster mass limit and the slope of the cluster mass function, the steepening varies between very strong and negligible. We find that the size of the effect is a continuous function of the power-law slope of the CMF, if the CMF extends to masses smaller than the maximum stellar mass. The number of O stars detected by GAIA might help in judging the importance of the IGIMF effect. The impact of different IGIMFs on integrated galaxy photometry is small, within the intrinsic scatter of observed galaxies. Observations of gas fractions and metallicities could rule out at least the most extreme sampling methods. As we still do not understand the details of star formation, one sampling method cannot be favoured over another. Also, the CMF at very low cluster masses is not well constrained observationally. These uncertainties need to be taken into account when using an IGIMF, with severe implications for galaxy evolution models and interpretations of galaxy observations.
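
    A minimal sketch of one possible random-sampling scheme (a "stop-nearest" variant with invented parameters; the paper compares several such implementations) is given below: cluster masses are drawn from a power-law CMF, and each cluster is then filled with stars drawn from a power-law IMF until its mass budget is spent.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_power_law(alpha, m_lo, m_hi, size):
        # Inverse-CDF sampling of p(m) proportional to m**(-alpha) on [m_lo, m_hi] (alpha != 1).
        u = rng.random(size)
        a = 1.0 - alpha
        return (m_lo**a + u * (m_hi**a - m_lo**a))**(1.0 / a)

    def populate_cluster(m_cluster, imf_alpha=2.35, m_min=0.1, m_max=100.0):
        # "Stop-nearest" random sampling of the IMF: keep drawing stars while
        # adding the next star brings the total closer to the cluster mass.
        # (Illustrative only; slopes and limits are assumptions.)
        stars, total = [], 0.0
        while total < m_cluster:
            m = sample_power_law(imf_alpha, m_min, m_max, 1)[0]
            if abs(total + m - m_cluster) > abs(total - m_cluster):
                break
            stars.append(m)
            total += m
        return np.array(stars)

    # Clusters drawn from a power-law CMF, then filled with stars from the IMF.
    clusters = sample_power_law(alpha=2.0, m_lo=50.0, m_hi=1e5, size=200)
    all_stars = np.concatenate([populate_cluster(mc) for mc in clusters])
    print("most massive star formed:", all_stars.max())
    ```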

    Development and implementation of a LabVIEW based SCADA system for a meshed multi-terminal VSC-HVDC grid scaled platform

    This project is oriented to the development of Supervisory Control and Data Acquisition (SCADA) software to control and supervise electrical variables from a scaled platform that represents a meshed HVDC grid, employing National Instruments hardware and the LabVIEW environment. The objective is to obtain real-time visualization of DC and AC electrical variables and lossless data stream acquisition. The acquisition system hardware elements have been configured, tested and installed on the grid platform. The system is composed of three chassis with integrated Field-Programmable Gate Arrays (FPGAs), each inside a VSC terminal cabinet; one is connected to a local processor via a PCI bus and the others via Ethernet through a switch. Analog acquisition modules, where A/D conversion takes place, are inserted into the chassis. A personal computer is used as host, screen terminal and storage space. There are two main access modes to the FPGAs through the real-time system: a Scan-mode VI has been implemented to monitor all the grid DC signals, and a faster FPGA-access-mode VI to monitor one converter's AC and DC values. The FPGA application consists of two tasks running at different rates, and a FIFO has been implemented to communicate between them without data loss. Multiple structures have been tested on the grid platform and evaluated, ensuring compliance with previously established specifications, such as sampling and scanning rate, screen refresh rate and possible data loss. Additionally, a turbine emulator was implemented and tested in LabVIEW for further testing.
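
    The lossless hand-off between the fast acquisition task and the slower logging task is a standard producer/consumer pattern. The sketch below shows the idea in Python with a software FIFO; the actual project uses LabVIEW FPGA FIFOs, so this is only an analogy, with invented rates and sample counts.

    ```python
    import queue
    import random
    import threading
    import time

    fifo = queue.Queue()        # unbounded FIFO between the two loops
    STOP = object()             # sentinel marking the end of acquisition

    def acquisition_loop(n_samples, period_s=0.001):
        # Fast task: pretend to read one A/D sample per period and enqueue it.
        for i in range(n_samples):
            fifo.put((i, random.uniform(-10.0, 10.0)))
            time.sleep(period_s)
        fifo.put(STOP)

    def logging_loop():
        # Slower task: drain the FIFO and "store" every sample it receives.
        stored = []
        while True:
            item = fifo.get()
            if item is STOP:
                break
            stored.append(item)
        print(f"stored {len(stored)} samples with no loss")

    producer = threading.Thread(target=acquisition_loop, args=(1000,))
    producer.start()
    logging_loop()
    producer.join()
    ```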

    The empirical replicability of task-based fMRI as a function of sample size

    Replicating results (i.e. obtaining consistent results using a new independent dataset) is an essential part of good science. As replicability has consequences for theories derived from empirical studies, it is of utmost importance to better understand the underlying mechanisms influencing it. A popular tool for non-invasive neuroimaging studies is functional magnetic resonance imaging (fMRI). While the effect of underpowered studies is well documented, the empirical assessment of the interplay between sample size and replicability of results for task-based fMRI studies remains limited. In this work, we extend existing work on this assessment in two ways. Firstly, we use a large database of 1400 subjects performing four types of tasks from the IMAGEN project to subsample a series of independent samples of increasing size. Secondly, replicability is evaluated using a multi-dimensional framework consisting of three different measures: (un)conditional test-retest reliability, coherence and stability. We demonstrate not only a positive effect of sample size, but also a trade-off between spatial resolution and replicability. When replicability is assessed voxelwise or when observing small areas of activation, a larger sample size than typically used in fMRI is required to replicate results. On the other hand, when focussing on clusters of voxels, we observe a higher replicability. In addition, we observe variability in the size of clusters of activation between experimental paradigms and between contrasts of parameter estimates within these paradigms.
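
    A toy version of the subsampling idea, using synthetic data rather than the IMAGEN pipeline and a simple split-half map correlation as an illustrative replicability measure, might look like the sketch below: disjoint groups of increasing size are drawn and their group-level maps are compared.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic "subject maps": a sparse true effect plus per-subject noise.
    # (Sizes and effect strength are invented for illustration only.)
    n_subjects, n_voxels = 1400, 5000
    true_effect = np.where(rng.random(n_voxels) < 0.05, 0.5, 0.0)
    data = true_effect + rng.normal(size=(n_subjects, n_voxels))

    def split_half_correlation(sample_size):
        # Draw two disjoint groups of `sample_size` subjects, average their
        # maps, and correlate the two group-level maps.
        idx = rng.permutation(n_subjects)[:2 * sample_size]
        map_a = data[idx[:sample_size]].mean(axis=0)
        map_b = data[idx[sample_size:]].mean(axis=0)
        return np.corrcoef(map_a, map_b)[0, 1]

    for n in (20, 50, 100, 200, 400):
        print(f"n = {n:3d}  split-half map correlation = {split_half_correlation(n):.2f}")
    ```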