33 research outputs found

    Scope management of the rock-excavation process required for the construction of the basement and foundation of the "Taller 6" building, under the PMBOK 5th-edition methodology, Section 5

    Get PDF
    Research paper. Project scope management includes all the processes required to ensure that a project is completed successfully. Recent events involving megaprojects for the development of the country's infrastructure have exposed a lack of scope management, making it necessary to rethink the processes carried out by those responsible for project planning. It was identified that traditional building-construction methods involving one or more basements are not developed using a practical tool based on the principles of the PMBOK Guide, 5th edition, Chapter 5. Contents: 1 Overview; 2 Frames of reference; 3 Methodology; 4 Deliverables; 5 Expected results and impact; 6 Conclusions; 7 Bibliography. Specialization: Specialist in Civil Works Management

    Gestión del conocimiento. Perspectiva multidisciplinaria (Knowledge Management: A Multidisciplinary Perspective), Volume 9

    Get PDF
    The book "Gestión del Conocimiento. Perspectiva Multidisciplinaria" (Knowledge Management: A Multidisciplinary Perspective), Volume 9, of the Colección Unión Global, is the product of research; each chapter presents findings from studies carried out by its authors. The book is an international, serialized, continuous, peer-reviewed, open-access publication covering all areas of knowledge, built on the efforts of researchers from several countries. It aims to contribute to the management of scientific, technological and humanistic knowledge, consolidating the transformation of knowledge in both organizational and university settings and developing the cognitive skills of everyday practice. Knowledge management is a way to consolidate a platform in public and private companies, educational institutions and non-governmental organizations, whether by generating policies for all hierarchies or a management model for administration, in which it is essential to align knowledge, workers, managers and the workplace toward creating environments conducive to the integral development of institutions

    Gestión del conocimiento. Perspectiva multidisciplinaria (Knowledge Management: A Multidisciplinary Perspective), Volume 10

    Get PDF
    The book "Gestión del Conocimiento. Perspectiva Multidisciplinaria" (Knowledge Management: A Multidisciplinary Perspective), Volume 10, of the Colección Unión Global, is the product of research; each chapter presents findings from studies carried out by its authors. The book is an international, serialized, continuous, peer-reviewed, open-access publication covering all areas of knowledge, built on the efforts of researchers from several countries. It aims to contribute to the management of scientific, technological and humanistic knowledge, consolidating the transformation of knowledge in both organizational and university settings and developing the cognitive skills of everyday practice. Knowledge management is a way to consolidate a platform in public and private companies, educational institutions and non-governmental organizations, whether by generating policies for all hierarchies or a management model for administration, in which it is essential to align knowledge, workers, managers and the workplace toward creating environments conducive to the integral development of institutions

    Production of inclusive ϒ(1S) and ϒ(2S) in p–Pb collisions at √sNN = 5.02 TeV

    No full text
    We report on the production of inclusive Υ(1S) and Υ(2S) in p–Pb collisions at √sNN = 5.02 TeV at the LHC. The measurement is performed with the ALICE detector at backward (−4.46 < ycms < −2.96) and forward (2.03 < ycms < 3.53) rapidity down to zero transverse momentum. The production cross sections of the Υ(1S) and Υ(2S) are presented, as well as the nuclear modification factor and the ratio of the forward to backward yields of Υ(1S). A suppression of the inclusive Υ(1S) yield in p–Pb collisions with respect to the yield from pp collisions scaled by the number of binary nucleon–nucleon collisions is observed at forward rapidity but not at backward rapidity. The results are compared to theoretical model calculations including nuclear shadowing or partonic energy loss effects
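    For reference, the nuclear modification factor quoted in this abstract is conventionally defined (a standard definition, not spelled out in the abstract itself) as the p–Pb yield divided by the pp yield scaled by the average number of binary nucleon–nucleon collisions:

    \[
      R_{p\mathrm{Pb}} = \frac{Y_{p\mathrm{Pb}}}{\langle N_{\mathrm{coll}} \rangle \, Y_{pp}}
    \]

    A value of R_pPb below unity, as reported here at forward rapidity, indicates suppression relative to binary-collision scaling.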

    Measurement of electrons from semileptonic heavy-flavor hadron decays in pp collisions at √s = 2.76 TeV

    No full text
    The pT-differential production cross section of electrons from semileptonic decays of heavy-flavor hadrons has been measured at mid-rapidity in proton-proton collisions at √s = 2.76 TeV in the transverse momentum range 0.5 < pT < 12 GeV/c with the ALICE detector at the LHC. The analysis was performed using minimum bias events and events triggered by the electromagnetic calorimeter. Predictions from perturbative QCD calculations agree with the data within the theoretical and experimental uncertainties

    Beauty production in pp collisions at √s = 2.76 TeV measured via semi-electronic decays

    No full text
    The ALICE collaboration at the LHC reports the measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y| < 0.8 and transverse momentum 1 < pT < 10 GeV/c, in pp collisions at √s = 2.76 TeV. Electrons not originating from semi-electronic decay of beauty hadrons are suppressed using the impact parameter of the corresponding tracks. The production cross section of beauty decay electrons is compared to the result obtained with an alternative method which uses the distribution of the azimuthal angle between heavy-flavour decay electrons and charged hadrons. Perturbative QCD calculations agree with the measured cross section within the experimental and theoretical uncertainties. The integrated visible cross section, σ(b→e) = 3.47 ± 0.40 (stat) +1.12/−1.33 (sys) ± 0.07 (norm) μb, was extrapolated to full phase space using Fixed Order plus Next-to-Leading Log (FONLL) predictions to obtain the total bb̄ production cross section, σ(bb̄) = 130 ± 15.1 (stat) +42.1/−49.8 (sys) +3.4/−3.1 (extr) ± 2.5 (norm) ± 4.4 (BR) μb

    DUNE Offline Computing Conceptual Design Report

    No full text
    This document describes Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE), in particular the conceptual design of the offline computing needed to accomplish its physics goals. Our emphasis in this document is the development of the computing infrastructure needed to acquire, catalog, reconstruct, simulate and analyze the data from the DUNE experiment and its prototypes. In this effort, we concentrate on developing the tools and systems that facilitate the development and deployment of advanced algorithms. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions as HEP computing evolves, and to provide computing that achieves the physics goals of the DUNE experiment

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    No full text
    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10^3 pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype
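    To illustrate why this workload parallelizes so well, the toy sketch below (not the actual DUNE simulator code) computes an induced signal on each pixel independently of every other pixel; in the real chain described above, each such per-pixel loop body is compiled into a CUDA kernel with numba.cuda.jit. The Gaussian charge-sharing model and all names here are illustrative assumptions.

    ```python
    import numpy as np

    def induced_current(pixel_x, charge_x, charge_q, sigma=1.0):
        """Toy model: Gaussian-weighted sharing of each drifted charge onto each pixel.

        Every pixel's signal is an independent sum over charge deposits, so the
        outer loop over pixels maps naturally onto one GPU thread per pixel.
        """
        # Distance of every charge deposit to every pixel: shape (n_pixels, n_charges)
        d = pixel_x[:, None] - charge_x[None, :]
        w = np.exp(-0.5 * (d / sigma) ** 2)
        return w @ charge_q  # one summed signal per pixel

    pixels = np.linspace(0.0, 10.0, 1000)   # ~10^3 pixels, as in the paper
    charges = np.array([2.0, 7.0])          # positions of two charge deposits
    q = np.array([1.0, 2.0])                # their deposited charge
    signal = induced_current(pixels, charges, q)
    ```

    Because no pixel's result depends on any other pixel's, the same computation written as a CUDA kernel needs no synchronization between threads, which is what makes the four-orders-of-magnitude speed-up reported in the abstract plausible.
    
    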
