
    Overlap Removal of Dimensionality Reduction Scatterplot Layouts

    Dimensionality Reduction (DR) scatterplot layouts have become a ubiquitous visualization tool for analyzing multidimensional data items across many application areas. Despite their popularity, scatterplots suffer from occlusion, especially when markers convey information, making it difficult for users to estimate the sizes of item groups and, more importantly, potentially obfuscating items that are critical for the analysis under execution. Different strategies have been devised to address this issue, either producing overlap-free layouts, which lack the power of contemporary DR techniques in uncovering interesting data patterns, or eliminating overlaps as a post-processing step. Despite the good results of post-processing techniques, the best methods typically expand or distort the scatterplot area, reducing the markers' size, sometimes to unreadable dimensions, and thus defeating the purpose of removing overlaps. This paper presents a novel post-processing strategy to remove overlaps from DR layouts that faithfully preserves the original layout's characteristics and the markers' sizes. Through an extensive comparative evaluation considering multiple different metrics, we show that the proposed strategy surpasses the state of the art in overlap removal while being two to three orders of magnitude faster for large datasets.
    Comment: 11 pages and 9 figures
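
    As a rough illustration of the post-processing family of overlap-removal approaches mentioned above (a minimal sketch, not the method proposed in the paper), the snippet below repeatedly pushes apart pairs of overlapping circular markers of equal radius; the function name, parameters, and stopping rule are hypothetical.

        import numpy as np

        def push_apart(points, radius, max_iterations=100):
            """Naive pairwise overlap removal for circular markers of equal radius.

            Illustrative sketch of a generic post-processing strategy; it does not
            reproduce the paper's algorithm or its layout-preservation guarantees.
            """
            pts = np.asarray(points, dtype=float).copy()
            min_dist = 2.0 * radius
            for _ in range(max_iterations):
                moved = False
                for i in range(len(pts)):
                    for j in range(i + 1, len(pts)):
                        delta = pts[j] - pts[i]
                        dist = np.hypot(*delta)
                        if dist < min_dist:
                            # Split the missing separation evenly between the two markers.
                            direction = delta / dist if dist > 0 else np.array([1.0, 0.0])
                            shift = 0.5 * (min_dist - dist) * direction
                            pts[i] -= shift
                            pts[j] += shift
                            moved = True
                if not moved:   # stop once no pair of markers overlaps any more
                    break
            return pts

        # Usage (hypothetical): overlap_free = push_apart(layout_2d, radius=0.02)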

    Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond

    In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones.
    Comment: 44 pages. 1 of USQCD whitepapers

    Shape reconstruction from gradient data

    We present a novel method for reconstructing the shape of an object from measured gradient data. A certain class of optical sensors does not measure the shape of an object, but its local slope. These sensors offer several advantages, including high information efficiency, sensitivity, and robustness. For many applications, however, it is necessary to acquire the shape, which must be calculated from the slopes by numerical integration. Existing integration techniques show drawbacks that render them unusable in many cases. Our method is based on approximation employing radial basis functions. It can be applied to irregularly sampled, noisy, and incomplete data, and it reconstructs surfaces both locally and globally with high accuracy.
    Comment: 16 pages, 5 figures, zip-file, submitted to Applied Optics
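
    To make the idea of approximating a surface from slope data with radial basis functions concrete, here is a heavily simplified sketch (Gaussian basis, plain least squares, no treatment of noise or missing data); all names are hypothetical and this is not the paper's actual formulation.

        import numpy as np

        def fit_surface_from_gradients(points, gradients, centers=None, sigma=1.0):
            """Fit z(x) = sum_k w_k * exp(-||x - c_k||^2 / (2 sigma^2)) so that its
            analytic gradient matches the measured slopes in a least-squares sense.

            Simplified sketch only; basis choice, regularization, and the handling
            of incomplete data in the paper may differ.
            """
            points = np.asarray(points, dtype=float)        # (N, 2) sample positions
            gradients = np.asarray(gradients, dtype=float)  # (N, 2) measured slopes
            centers = points if centers is None else np.asarray(centers, dtype=float)

            diff = points[:, None, :] - centers[None, :, :]  # (N, K, 2)
            r2 = np.sum(diff**2, axis=-1)                    # (N, K)
            phi = np.exp(-r2 / (2.0 * sigma**2))             # (N, K) basis values
            # Analytic gradient of each basis function at each sample position.
            dphi = -diff / sigma**2 * phi[:, :, None]        # (N, K, 2)

            # Stack the x- and y-derivative equations into one (2N, K) system.
            A = np.concatenate([dphi[:, :, 0], dphi[:, :, 1]], axis=0)
            b = np.concatenate([gradients[:, 0], gradients[:, 1]])
            w, *_ = np.linalg.lstsq(A, b, rcond=None)

            # Heights are recovered only up to an additive constant,
            # since only gradient data constrain the fit.
            return phi @ w, w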

    HYPERION: An open-source parallelized three-dimensional dust continuum radiative transfer code

    HYPERION is a new three-dimensional dust continuum Monte-Carlo radiative transfer code that is designed to be as generic as possible, allowing radiative transfer to be computed through a variety of three-dimensional grids. The main part of the code is problem-independent and only requires an arbitrary three-dimensional density structure, dust properties, the position and properties of the illuminating sources, and parameters controlling the running and output of the code. HYPERION is parallelized and is shown to scale well to thousands of processes. Two common benchmark models for protoplanetary disks were computed, and the results are found to be in excellent agreement with those from other codes. Finally, to demonstrate the capabilities of the code, dust temperatures, SEDs, and synthetic multi-wavelength images were computed for a dynamical simulation of a low-mass star formation region. HYPERION is being actively developed to include new features and is publicly available (http://www.hyperion-rt.org).
    Comment: Accepted for publication in Astronomy & Astrophysics. HYPERION is being prepared for release at the start of 2012, but you can already sign up to the mailing list at http://www.hyperion-rt.org to be informed once it is available for download
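
    For readers unfamiliar with the Monte-Carlo approach to dust radiative transfer, the toy sketch below propagates photon packets through a 3-D density grid, sampling optical depths and choosing between scattering and absorption. It only illustrates the generic idea; HYPERION's actual treatment (frequency-dependent dust opacities, temperature iteration, multiple grid geometries, MPI parallelism) is far more involved, and all names here are hypothetical.

        import numpy as np

        def random_direction(rng):
            """Isotropic unit vector."""
            mu = 2.0 * rng.random() - 1.0             # cos(theta) uniform in [-1, 1]
            phi = 2.0 * np.pi * rng.random()
            s = np.sqrt(1.0 - mu * mu)
            return np.array([s * np.cos(phi), s * np.sin(phi), mu])

        def propagate_photons(density, kappa, cell_size, n_photons=10_000,
                              albedo=0.5, rng=None):
            """Toy isotropic Monte-Carlo random walk through a 3-D density grid.

            Generic illustration only, not HYPERION's implementation.
            Returns the number of absorption events per grid cell.
            """
            rng = np.random.default_rng() if rng is None else rng
            absorbed = np.zeros(density.shape)
            extent = np.array(density.shape) * cell_size
            for _ in range(n_photons):
                pos = extent / 2.0                    # emit from the grid centre
                direction = random_direction(rng)
                tau_target = -np.log(rng.random())    # sampled optical depth
                tau = 0.0
                while True:
                    cell = np.floor(pos / cell_size).astype(int)
                    if np.any(cell < 0) or np.any(cell >= density.shape):
                        break                         # photon escapes the grid
                    tau += kappa * density[tuple(cell)] * cell_size
                    if tau >= tau_target:
                        if rng.random() < albedo:     # scatter: new direction and depth
                            direction = random_direction(rng)
                            tau_target = -np.log(rng.random())
                            tau = 0.0
                        else:                         # absorb and terminate the packet
                            absorbed[tuple(cell)] += 1
                            break
                    pos = pos + direction * cell_size  # march one cell length
            return absorbed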

    Underground Neutrino Detectors for Particle and Astroparticle Science: the Giant Liquid Argon Charge Imaging ExpeRiment (GLACIER)

    The current focus of the CERN program is the Large Hadron Collider (LHC); however, CERN is also engaged in long-baseline neutrino physics with the CNGS project and supports T2K (recognized as CERN RE13), and for good reasons: a number of observed phenomena in high-energy physics and cosmology lack a resolution within the Standard Model of particle physics. These puzzles include the origin of neutrino masses, CP violation in the leptonic sector, and the baryon asymmetry of the Universe, and they will only partially be addressed at the LHC. A positive measurement of $\sin^2 2\theta_{13} > 0.01$ would certainly give a tremendous boost to neutrino physics by opening the possibility of studying CP violation in the lepton sector and determining the neutrino mass hierarchy with upgraded conventional super-beams. These experiments (so-called ``Phase II'') require, in addition to an upgraded beam power, next-generation very massive neutrino detectors with excellent energy resolution and high detection efficiency over a wide neutrino energy range, to cover the 1st and 2nd oscillation maxima, as well as excellent particle identification and $\pi^0$ background suppression. Two generations of large water Cherenkov detectors at Kamioka (Kamiokande and Super-Kamiokande) have been extremely successful, and there are good reasons to consider a third-generation water Cherenkov detector with an order of magnitude larger mass than Super-Kamiokande for both non-accelerator (proton decay, supernovae, ...) and accelerator-based physics. On the other hand, a very massive underground liquid Argon detector of about 100 kton could represent a credible alternative for the precision measurements of ``Phase II'' and aim at significantly new results in neutrino astroparticle and non-accelerator-based particle physics (e.g. proton decay).
    Comment: 31 pages, 14 figures
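
    For context on the quoted $\sin^2 2\theta_{13}$ sensitivity and the oscillation maxima mentioned above, the leading-order (textbook, vacuum) $\nu_\mu \to \nu_e$ appearance probability can be written as

        P(\nu_\mu \to \nu_e) \simeq \sin^2\theta_{23}\, \sin^2 2\theta_{13}\,
            \sin^2\!\left( \frac{\Delta m^2_{31} L}{4E} \right),

    so the appearance signal scales directly with $\sin^2 2\theta_{13}$, and the 1st and 2nd oscillation maxima correspond to $\Delta m^2_{31} L / (4E) = \pi/2$ and $3\pi/2$, respectively. This is a standard approximation (matter effects and the CP phase are neglected) and is not taken from the abstract itself.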

    The Sensing Capacity of Sensor Networks

    This paper demonstrates fundamental limits of sensor networks for detection problems where the number of hypotheses is exponentially large. Such problems characterize many important applications, including detection and classification of targets in a geographical area using a network of sensors, and detecting complex substances with a chemical sensor array. We refer to such applications as large-scale detection problems. Using the insight that these problems share fundamental similarities with the problem of communicating over a noisy channel, we define a quantity called the sensing capacity and lower bound it for a number of sensor network models. The sensing capacity expression differs significantly from the channel capacity due to the fact that a fixed sensor configuration encodes all states of the environment. As a result, codewords are dependent and non-identically distributed. The sensing capacity provides a bound on the minimal number of sensors required to detect the state of an environment to within a desired accuracy. The results differ significantly from classical detection theory and provide an intriguing connection between sensor networks and communications. In addition, we discuss the insight that sensing capacity provides for the problem of sensor selection.
    Comment: Submitted to IEEE Transactions on Information Theory, November 200
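
    To make the channel analogy concrete, recall Shannon's channel capacity in standard notation (the paper's precise definition of the sensing capacity and its lower bounds are given in the paper itself and are not reproduced here):

        C = \max_{p(x)} I(X; Y) \quad \text{bits per channel use.}

    By analogy, if $C_s$ denotes a sensing capacity measured in bits per sensor, then distinguishing among $M$ environmental hypotheses to the desired accuracy requires on the order of $N \gtrsim \log_2 M / C_s$ sensors. Because each sensor's reading is determined by one fixed environment rather than by a freely chosen input, the induced ``codewords'' are dependent and non-identically distributed, which is why the resulting expression differs from the classical channel capacity.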

    Radiation-Induced Error Criticality in Modern HPC Parallel Accelerators

    In this paper, we evaluate the error criticality of radiation-induced errors on modern High-Performance Computing (HPC) accelerators (Intel Xeon Phi and NVIDIA K40) through a dedicated set of metrics. We show that, as far as imprecise computing is concerned, simple mismatch detection is not sufficient to evaluate and compare the radiation sensitivity of HPC devices and algorithms. Our analysis quantifies and qualifies radiation effects on applications' output, correlating the number of corrupted elements with their spatial locality. Also, we provide the mean relative error (dataset-wise) to evaluate the magnitude of radiation-induced errors. We apply the selected metrics to experimental results obtained in various radiation test campaigns, for a total of more than 400 hours of beam time per device. The amount of data we gathered allows us to evaluate the error criticality of a representative set of algorithms from HPC suites. Additionally, based on the characteristics of the tested algorithms, we draw generic reliability conclusions for broader classes of codes. We show that arithmetic operations are less critical for the K40, while the Xeon Phi is more reliable when executing particle interactions solved through Finite Difference Methods. Finally, iterative stencil operations seem the most reliable on both architectures.
    This work was supported by the STIC-AmSud/CAPES scientific cooperation program under the EnergySFE research project, grant 99999.007556/2015-02, the EU H2020 Programme, and MCTI/RNP-Brazil under the HPC4E Project, grant agreement n° 689772. Tested K40 boards were donated thanks to Steve Keckler, Timothy Tsai, and Siva Hari from NVIDIA.
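
    As a small illustration of the kind of output comparison such metrics rely on (a hedged sketch: the tolerance, names, and exact definitions are assumptions, not the paper's formulations), one can compare an irradiated run's output against a fault-free golden copy and report the corrupted-element count and the dataset-wise mean relative error:

        import numpy as np

        def error_metrics(golden, observed, rel_tol=1e-6):
            """Compare a radiation-test output against the fault-free (golden) output.

            Illustrative sketch only; the spatial-locality analysis of corrupted
            elements discussed in the abstract is omitted here.
            """
            golden = np.asarray(golden, dtype=float)
            observed = np.asarray(observed, dtype=float)
            # Guard against division by zero when a golden element is exactly 0.
            denom = np.where(golden != 0.0, np.abs(golden), 1.0)
            rel_err = np.abs(observed - golden) / denom
            corrupted = rel_err > rel_tol            # element-wise mismatch mask
            return {
                "corrupted_elements": int(np.count_nonzero(corrupted)),
                "corrupted_fraction": float(corrupted.mean()),
                "mean_relative_error": float(rel_err[corrupted].mean())
                                       if corrupted.any() else 0.0,
            }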