12,086 research outputs found

    Recent Advances in Graph Partitioning

    We survey recent trends in practical algorithms for balanced graph partitioning, together with applications and future research directions.
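
    For readers unfamiliar with the term, the balanced k-way partitioning problem surveyed here is usually stated as follows (a standard textbook formulation, not quoted from the survey itself): given G = (V, E) and an imbalance tolerance \varepsilon, find disjoint blocks V_1, \dots, V_k covering V that minimize the edge cut subject to the balance constraint,

        \min_{V_1,\dots,V_k} \bigl|\{\{u,v\}\in E : u\in V_i,\ v\in V_j,\ i\neq j\}\bigr|
        \quad \text{s.t.} \quad |V_i| \le (1+\varepsilon)\left\lceil |V|/k \right\rceil .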

    Analysis of variance--why it is more important than ever

    Analysis of variance (ANOVA) is an extremely important method in exploratory and confirmatory data analysis. Unfortunately, in complex problems (e.g., split-plot designs), it is not always easy to set up an appropriate ANOVA. We propose a hierarchical analysis that automatically gives the correct ANOVA comparisons even in complex scenarios. The inferences for all means and variances are performed under a model with a separate batch of effects for each row of the ANOVA table. We connect to classical ANOVA by working with finite-sample variance components: fixed and random effects models are characterized by inferences about existing levels of a factor and new levels, respectively. We also introduce a new graphical display showing inferences about the standard deviations of each batch of effects. We illustrate with two examples from our applied data analysis, first illustrating the usefulness of our hierarchical computations and displays, and second showing how the ideas of ANOVA are helpful in understanding a previously fit hierarchical model.
    Comment: This paper is discussed in [math.ST/0508526], [math.ST/0508527], [math.ST/0508528], [math.ST/0508529]; rejoinder in [math.ST/0508530].
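
    As a rough illustration of the "batches of effects" view described above (a minimal sketch in plain Python, not the authors' hierarchical Bayesian computation), one can estimate the finite-sample standard deviation of each batch of effects in a balanced two-way layout directly from cell means:

    ```python
    # Minimal sketch: finite-sample standard deviation of each batch of effects
    # in a balanced two-way layout (toy data, one observation per cell).
    import numpy as np

    rng = np.random.default_rng(0)
    I, J = 6, 8
    y = rng.normal(size=(I, J))                      # toy responses

    grand = y.mean()
    row_eff = y.mean(axis=1) - grand                 # batch 1: row effects
    col_eff = y.mean(axis=0) - grand                 # batch 2: column effects
    resid = y - grand - row_eff[:, None] - col_eff[None, :]   # batch 3: residuals

    for name, batch in [("rows", row_eff), ("columns", col_eff),
                        ("residuals", resid.ravel())]:
        print(f"{name:9s} sd = {batch.std(ddof=1):.3f}")
    ```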

    Problems on q-Analogs in Coding Theory

    The interest in q-analogs of codes and designs has increased in the last few years as a consequence of their new application in error correction for random network coding. There are many interesting theoretical, algebraic, and combinatorial coding problems concerning these q-analogs which remain unsolved. The first goal of this paper is to give a short summary of the large amount of research done in this area, mainly in the last few years, and to provide most of the relevant references. The second goal is to present one hundred open questions and problems for future research, whose solution will advance the knowledge in this area. The third goal is to present and start some directions toward solving some of these problems.
    Comment: arXiv admin note: text overlap with arXiv:0805.3528 by other authors
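
    As standard background (not specific to this paper), the central q-analog here is the Gaussian binomial coefficient, which counts the k-dimensional subspaces of \mathbb{F}_q^n and recovers the ordinary binomial coefficient as q \to 1:

        \begin{bmatrix} n \\ k \end{bmatrix}_q \;=\; \prod_{i=0}^{k-1} \frac{q^{\,n-i}-1}{q^{\,k-i}-1}
        \;\xrightarrow[\;q\to 1\;]{}\; \binom{n}{k}.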

    Multilevel Bayesian framework for modeling the production, propagation and detection of ultra-high energy cosmic rays

    Ultra-high energy cosmic rays (UHECRs) are atomic nuclei with energies over ten million times the energies accessible to human-made particle accelerators. Evidence suggests that they originate from relatively nearby extragalactic sources, but the nature of the sources is unknown. We develop a multilevel Bayesian framework for assessing association of UHECRs and candidate source populations, and Markov chain Monte Carlo algorithms for estimating model parameters and comparing models by computing, via Chib's method, marginal likelihoods and Bayes factors. We demonstrate the framework by analyzing measurements of 69 UHECRs observed by the Pierre Auger Observatory (PAO) from 2004 to 2009, using a volume-complete catalog of 17 local active galactic nuclei (AGN) out to 15 megaparsecs as candidate sources. An early portion of the data ("period 1," with 14 events) was used by PAO to set an energy cut maximizing the anisotropy in period 1; the 69 measurements include this "tuned" subset and subsequent "untuned" events with energies above the same cutoff. Also, measurement errors are approximately summarized. These factors are problematic for independent analyses of PAO data. Within the context of "standard candle" source models (i.e., with a common isotropic emission rate), and considering only the 55 untuned events, there is no significant evidence favoring association of UHECRs with local AGN over an isotropic background. The highest-probability associations are with the two nearest, adjacent AGN, Centaurus A and NGC 4945. If the association model is adopted, the fraction of UHECRs that may be associated is likely nonzero but well below 50%. Our framework enables estimation of the angular scale for deflection of cosmic rays by cosmic magnetic fields; relatively modest scales of roughly 3° to 30° are favored. Models that assign a large fraction of UHECRs to a single nearby source (e.g., Centaurus A) are ruled out unless very large deflection scales are specified a priori, and even then they are disfavored. However, including the period 1 data alters the conclusions significantly, and a simulation study supports the idea that the period 1 data are anomalous, presumably due to the tuning. Accurate and optimal analysis of future data will likely require more complete disclosure of the data.
    Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org), at http://dx.doi.org/10.1214/13-AOAS654
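
    For context, Chib's method referenced above rests on a textbook identity (stated here in generic notation, not as a derivation from the paper): evaluating Bayes' theorem at a single high-posterior-density point \theta^{*} yields the marginal likelihood, with the posterior ordinate estimated from the MCMC output, and models are then compared through the Bayes factor:

        m(y) \;=\; \frac{p(y \mid \theta^{*})\, p(\theta^{*})}{p(\theta^{*} \mid y)},
        \qquad
        B_{10} \;=\; \frac{m_1(y)}{m_0(y)}.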

    Multilevel RTS in proton irradiated CMOS image sensors manufactured in a deep submicron technology

    A new automated method able to detect multilevel random telegraph signals (RTS) in pixel arrays and to extract their main characteristics is presented. The proposed method is applied to several proton-irradiated pixel arrays manufactured using a 0.18 µm CMOS process dedicated to imaging. Despite the large proton energy range and the large fluence range used, similar exponential RTS amplitude distributions are observed. A mean maximum amplitude independent of displacement damage dose is extracted from these distributions, and the number of RTS defects appears to scale well with total nonionizing energy loss. These conclusions allow the prediction of RTS amplitude distributions. The effect of electric field on RTS amplitude is also studied, and no significant relation between applied bias and RTS amplitude is observed.
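
    As a loose illustration of the kind of per-pixel processing such a method involves (an assumed toy approach, not the automated multilevel detection algorithm of the paper), RTS transitions in a pixel's dark-signal time series can be flagged by thresholding sample-to-sample jumps against a robust noise estimate, with the spread of the resulting discrete levels reported as the maximum amplitude:

    ```python
    # Toy sketch: crude per-pixel RTS amplitude extraction (illustrative only).
    import numpy as np

    def rts_max_amplitude(signal, jump_sigma=5.0):
        """Spread between discrete levels, or 0.0 if no jump is detected."""
        diffs = np.diff(signal)
        noise = np.median(np.abs(diffs)) / 0.6745        # robust noise estimate
        jumps = np.flatnonzero(np.abs(diffs) > jump_sigma * noise) + 1
        if jumps.size == 0:
            return 0.0
        levels = np.array([seg.mean() for seg in np.split(signal, jumps)])
        return float(levels.max() - levels.min())

    # Two-level RTS with amplitude ~10 on top of unit read noise.
    rng = np.random.default_rng(1)
    state = np.cumsum(rng.random(2000) < 0.01) % 2       # telegraph state
    trace = 10.0 * state + rng.normal(scale=1.0, size=2000)
    print(rts_max_amplitude(trace))
    ```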

    Analysis and design of a modular multilevel converter with trapezoidal modulation for medium and high voltage DC-DC transformers

    Conventional dual active bridge topologies provide galvanic isolation and soft switching over a reasonable operating range without dedicated resonant circuits. However, scaling the two-level dual active bridge to higher dc voltage levels is impeded by several challenges, among which is the high dv/dt stress on the coupling transformer insulation. Gating and thermal characteristics of series switch arrays add to the limitations. To avoid the use of standard bulky modular multilevel bridges, this paper analyzes an alternative modulation technique in which staircase-approximated trapezoidal voltage waveforms are produced, thus alleviating the developed dv/dt stresses. Modular design is realized by the use of half-bridge chopper cells. The analyzed converter is therefore a modular multilevel converter operated in a new mode with no common-mode dc arm currents as well as reduced capacitor size, and hence reduced cell footprint. Suitable switching patterns are developed and various design and operation aspects are studied. Soft-switching characteristics are shown to be comparable to those of the two-level dual active bridge. Experimental results from a scaled test rig validate the presented concept.
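
    To make the staircase-approximated trapezoid concrete (a toy waveform generator under assumed parameters, not the paper's switching-pattern design), inserting or bypassing the half-bridge cells one at a time limits each voltage step to a single cell voltage, which is what relieves the dv/dt stress:

    ```python
    # Toy sketch: one period of a staircase-approximated trapezoidal waveform
    # built from n_cells half-bridge cells switched one at a time.
    import numpy as np

    def staircase_trapezoid(n_cells=8, v_cell=1.0, dwell=10, flat=200, period=800):
        ramp_up = np.repeat(np.arange(0, n_cells + 1), dwell) * v_cell
        top = np.full(flat, n_cells * v_cell)
        half = np.concatenate([ramp_up, top, ramp_up[::-1]])
        wave = np.concatenate([half, -half])
        return np.pad(wave, (0, max(period - wave.size, 0)))   # zero-pad to period

    v = staircase_trapezoid()
    print(v.max(), v.min(), np.abs(np.diff(v)).max())   # max step equals one cell voltage
    ```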

    PT-Scotch: A tool for efficient parallel graph ordering

    The parallel ordering of large graphs is a difficult problem, because on the one hand minimum degree algorithms do not parallelize well, and on the other hand obtaining high-quality orderings with the nested dissection algorithm requires efficient graph bipartitioning heuristics, the best sequential implementations of which are also hard to parallelize. This paper presents a set of algorithms, implemented in the PT-Scotch software package, which allow one to order large graphs in parallel, yielding orderings whose quality is only slightly worse than that of state-of-the-art sequential algorithms. Our implementation uses the classical nested dissection approach but relies on several novel features to solve the parallel graph bipartitioning problem. Thanks to these improvements, PT-Scotch produces consistently better orderings than ParMeTiS on large numbers of processors.
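
    As a serial illustration of the nested dissection idea (a sketch only, using networkx's Kernighan-Lin heuristic as a stand-in for PT-Scotch's parallel bipartitioning machinery), each recursion bisects the graph, takes a vertex separator between the two sides, and numbers the sides first and the separator last:

    ```python
    # Sketch: nested dissection ordering by recursive bisection (serial, toy scale).
    import networkx as nx
    from networkx.algorithms.community import kernighan_lin_bisection

    def nested_dissection(G):
        """Elimination ordering: each side first, its separator last, recursively."""
        if G.number_of_nodes() <= 3:
            return list(G.nodes())
        A, B = kernighan_lin_bisection(G)
        S = {u for u in A if any(v in B for v in G[u])}    # vertex separator
        return (nested_dissection(G.subgraph(A - S).copy())
                + nested_dissection(G.subgraph(B).copy())
                + list(S))

    G = nx.grid_2d_graph(8, 8)                 # toy sparse graph, 64 vertices
    perm = nested_dissection(G)
    print(len(perm), perm[-3:])                # top-level separator is numbered last
    ```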