
    GPU voxelization

    Given a triangulated model, we want to identify which voxels of a voxel grid are intersected by the boundary of the model. Another branch of voxelization techniques detects not only the boundary but also the interior of the model. The voxels are often cubes, but this is not a restriction: other techniques take the view frustum as the voxel grid, so the voxels are prisms. There are different kinds of voxelization depending on the rasterization behavior. Approximate rasterization is the standard way of rasterizing fragments on the GPU: only those fragments whose center lies inside the projection of the primitive are identified. Conservative rasterization (Hasselgren et al., 2005) applies a dilation operation to the primitive on the GPU, so that in the rasterization stage every intersected fragment has its center inside the dilated primitive. However, this can produce spurious fragments, i.e., non-intersected pixels. Exact voxelization detects only the voxels that are actually intersected.
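    The exact voxelization described above ultimately reduces to a triangle/box overlap query per candidate voxel. A minimal CPU sketch of the standard separating-axis test between a triangle and an axis-aligned voxel is shown below; the function names are our own and this illustrates the geometric test only, not the paper's GPU implementation.

```python
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def tri_aabb_overlap(tri, center, half):
    """Separating-axis overlap test between a triangle and an axis-aligned
    box (voxel) given by its center and half-extents."""
    v = [sub(p, center) for p in tri]                 # vertices in box frame
    e = [sub(v[1], v[0]), sub(v[2], v[1]), sub(v[0], v[2])]  # triangle edges

    # 1) the three box face normals: triangle AABB vs box extents
    for k in range(3):
        if min(p[k] for p in v) > half[k] or max(p[k] for p in v) < -half[k]:
            return False

    # 2) the triangle plane normal vs the box
    n = cross(e[0], e[1])
    r = sum(half[k] * abs(n[k]) for k in range(3))    # box projection radius
    if abs(dot(n, v[0])) > r:
        return False

    # 3) the nine cross products of box axes and triangle edges
    for u in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
        for edge in e:
            a = cross(u, edge)
            p = [dot(a, q) for q in v]                # vertex projections
            r = sum(half[k] * abs(a[k]) for k in range(3))
            if min(p) > r or max(p) < -r:
                return False
    return True                                       # no separating axis found
```

    Running this test for every voxel whose bounding box overlaps the triangle's bounding box yields an exact surface voxelization, with no spurious voxels.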

    Dynamic configuration of partitioning in spark applications

    Spark has become one of the main options for large-scale analytics running on top of shared-nothing clusters. This work makes a deep dive into the parallelism configuration and sheds light on the behavior of parallel Spark jobs. It is motivated by the fact that running a Spark application on all the available processors does not necessarily imply a lower running time, while it may entail a waste of resources. We first propose analytical models for expressing the running time as a function of the number of machines employed. We then take another step, namely to present novel algorithms for configuring dynamic partitioning with a view to minimizing resource consumption without sacrificing running time beyond a user-defined limit. The problem we target is NP-hard. To tackle it, we propose a greedy approach after introducing the notions of dependency graphs and of the benefit from modifying the degree of partitioning at a stage; complementarily, we investigate a randomized approach. Our polynomial solutions are capable of judiciously using the resources that are potentially at the user's disposal and strike interesting trade-offs between running time and resource consumption. Their efficiency is thoroughly investigated through experiments based on real execution data.
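    The greedy idea described in the abstract can be illustrated with a toy sketch: repeatedly shrink the degree of partitioning at the stage where the change saves the most resources, as long as the total running time stays within a user-defined slack of the fully parallel baseline. The cost model below (time ≈ work/p + overhead·p, resources ≈ partitions × stage duration) and all names are our own simplifying assumptions, not the paper's actual model or algorithm.

```python
def stage_time(work, p, overhead=0.1):
    """Assumed per-stage cost model: parallel work plus per-partition overhead."""
    return work / p + overhead * p

def total_time(works, parts):
    return sum(stage_time(w, p) for w, p in zip(works, parts))

def total_resources(works, parts):
    # Resource consumption: partitions held for the duration of their stage.
    return sum(p * stage_time(w, p) for w, p in zip(works, parts))

def greedy_partitioning(works, max_parts, slack=1.2):
    """Greedily halve the partitioning of the most beneficial stage while the
    total running time stays within `slack` times the fully parallel time."""
    parts = [max_parts] * len(works)            # start fully parallel
    limit = slack * total_time(works, parts)    # user-defined time budget
    while True:
        best = None
        for i in range(len(parts)):
            if parts[i] <= 1:
                continue
            trial = parts[:]
            trial[i] //= 2                      # candidate: halve stage i
            if total_time(works, trial) > limit:
                continue                        # would violate the time budget
            gain = total_resources(works, parts) - total_resources(works, trial)
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, trial)
        if best is None:
            return parts                        # no beneficial feasible move left
        parts = best[1]
```

    A real implementation would rank moves over the stage dependency graph and use measured stage profiles instead of this synthetic cost model.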

    Spark deployment and performance evaluation on the MareNostrum supercomputer

    In this paper we present a framework to enable data-intensive Spark workloads on MareNostrum, a petascale supercomputer designed mainly for compute-intensive applications. As far as we know, this is the first attempt to investigate optimized deployment configurations of Spark on a petascale HPC setup. We detail the design of the framework and present benchmark data to provide insights into the scalability of the system. We examine the impact of different configurations, including parallelism, storage, and networking alternatives, and we discuss several aspects of executing Big Data workloads on a computing system based on the compute-centric paradigm. Further, we derive conclusions aiming to pave the way towards systematic and optimized methodologies for fine-tuning data-intensive applications on large clusters, with emphasis on parallelism configurations.

    SOCIO-ECONOMIC AND POLITICAL CONTEXTUALIZATION OF THE EMERGENCE AND DEVELOPMENT OF PTRCs IN LATIN AMERICA AND THE CARIBBEAN

    This work results from an exploratory study of the Conditional Cash Transfer Programs (Programas de Transferência de Renda Condicionada, PTRCs) under way in Latin America and the Caribbean. It aims to contextualize the emergence and development of the PTRCs in the region, addressing the economic, social, and political-ideological factors that led to the inclusion of these programs in the social protection systems of the vast majority of Latin American countries from the 1990s onward. Keywords: contextualization, Conditional Cash Transfer Programs, Latin America and the Caribbean.

    Systematic review of economic evaluations of telemonitoring systems for pacemakers.

    Introduction and objectives: Over the last decade, telemedicine applied to the monitoring of cardiac pacemakers has grown extraordinarily. Whether this technology differs in efficiency from the conventional approach is unknown. The objective of this study is to carry out a systematic review of the available evidence on resource consumption and health outcomes under both follow-up modalities. Methods: The search covered 11 databases and included studies published up to November 2014. The inclusion criteria were: a) experimental or observational design; b) studies based on full economic evaluations; c) patients with pacemakers; and d) telemonitoring compared with the hospital-based modality.

    Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in √s = 7 TeV pp collisions with the ATLAS detector

    A search for the direct production of charginos and neutralinos in final states with three electrons or muons and missing transverse momentum is presented. The analysis is based on 4.7 fb−1 of proton–proton collision data delivered by the Large Hadron Collider and recorded with the ATLAS detector. Observations are consistent with Standard Model expectations in three signal regions that are either depleted or enriched in Z-boson decays. Upper limits at 95% confidence level are set in R-parity conserving phenomenological minimal supersymmetric models and in simplified models, significantly extending previous results.

    Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector

    A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb−1 of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV and assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements.

    Search for supersymmetry at √s=13 TeV in final states with jets and two same-sign leptons or three leptons with the ATLAS detector

    A search for strongly produced supersymmetric particles is conducted using signatures involving multiple energetic jets and either two isolated leptons (e or μ) with the same electric charge or at least three isolated leptons. The search also utilises b-tagged jets, missing transverse momentum and other observables to extend its sensitivity. The analysis uses a data sample of proton–proton collisions at √s = 13 TeV recorded with the ATLAS detector at the Large Hadron Collider in 2015, corresponding to a total integrated luminosity of 3.2 fb−1. No significant excess over the Standard Model expectation is observed. The results are interpreted in several simplified supersymmetric models and extend the exclusion limits from previous searches. In the context of exclusive production and simplified decay modes, gluino masses are excluded at 95% confidence level up to 1.1–1.3 TeV for light neutralinos (depending on the decay channel), and bottom squark masses are also excluded up to 540 GeV. In the former scenarios, neutralino masses are also excluded up to 550–850 GeV for gluino masses around 1 TeV.

    Search for supersymmetry in events with large missing transverse momentum, jets, and at least one tau lepton in 20 fb−1 of √s=8 TeV proton-proton collision data with the ATLAS detector

    A search for supersymmetry (SUSY) in events with large missing transverse momentum, jets, at least one hadronically decaying tau lepton and zero or one additional light leptons (electron/muon) has been performed using 20.3 fb−1 of proton-proton collision data at √s = 8 TeV recorded with the ATLAS detector at the Large Hadron Collider. No excess above the Standard Model background expectation is observed in the various signal regions, and 95% confidence level upper limits on the visible cross section for new phenomena are set. The results of the analysis are interpreted in several SUSY scenarios, significantly extending previous limits obtained in the same final states. In the framework of minimal gauge-mediated SUSY breaking models, values of the SUSY breaking scale Λ below 63 TeV are excluded, independently of tan β. Exclusion limits are also derived for an mSUGRA/CMSSM model, in both the R-parity-conserving and R-parity-violating cases. A further interpretation is presented in a framework of natural gauge mediation, in which the gluino is assumed to be the only light coloured sparticle; gluino masses below 1090 GeV are excluded.