Diboson production at the LHC
These proceedings present an overview of the diboson production cross-section measurements and constraints on anomalous triple-gauge-boson couplings performed by the ATLAS and CMS collaborations using proton-proton collisions at centre-of-mass energies of √s = 8 and 13 TeV at the LHC. Results for all combinations of W, Z, and γ gauge bosons are presented, with emphasis on the new WZ production cross sections measured by ATLAS at √s = 13 TeV and the WW production cross section measured by CMS at √s = 13 TeV. New constraints on anomalous triple-gauge couplings have been set by both experiments at 8 TeV.
Improving detection of asphalt distresses with deep learning-based diffusion model for intelligent road maintenance
Research on road infrastructure structural health monitoring is critical due to the growing problem of deteriorated pavement conditions. The traditional approach to pavement distress detection relies on human visual recognition, a time-consuming and labor-intensive method. While deep learning-based computer vision systems are the most promising approach, they face reduced performance due to the scarcity of labeled data, high annotation costs misaligned with engineering applications, and limited instances of minority defects. This paper introduces a novel generative diffusion model for data augmentation, creating synthetic images of rare defects. It also investigates methods to enhance image quality and reduce production time. Compared to Generative Adversarial Networks, the optimal configuration excels in reliability, quality, and diversity. After incorporating synthetic images into the training of our pavement distress detector, YOLOv5, its mean average precision improved. This computer-aided system enhances recognition and labelling efficiency, promoting intelligent maintenance and repairs.
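As a hedged illustration of the augmentation step described above (the directory names, file pattern, and synthetic share below are assumptions for the sketch, not the paper's actual configuration), mixing diffusion-generated defect images into a detector's training list might look like this:

# Minimal sketch: blend diffusion-generated synthetic images of rare
# defects into a real training set before fine-tuning a detector such
# as YOLOv5. Paths and the synthetic share are illustrative assumptions.
import random
from pathlib import Path

def build_training_list(real_dir, synth_dir, synth_share=0.3, seed=0):
    """Return a shuffled list of image paths in which synthetic images
    make up at most `synth_share` of the final training set."""
    real = sorted(Path(real_dir).glob("*.jpg"))
    synth = sorted(Path(synth_dir).glob("*.jpg"))
    rng = random.Random(seed)
    # Cap the number of synthetic images so their final share is bounded.
    n_synth = min(len(synth), int(len(real) * synth_share / (1.0 - synth_share)))
    mixed = real + rng.sample(synth, n_synth)
    rng.shuffle(mixed)
    return mixed

train_images = build_training_list("data/real", "data/synthetic")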
An end-to-end computer vision system based on deep learning for pavement distress detection and quantification
The performance of deep learning-based computer vision systems for road infrastructure assessment is hindered by the scarcity of real-world, high-volume public datasets. Current research predominantly focuses on crack detection and segmentation, without devising end-to-end systems capable of effectively evaluating the most affected roads and assessing out-of-sample performance. To address these limitations, this study proposes a public dataset with annotations of 7099 images and 13 types of defects, not limited to cracks, for the benchmarking and development of deep learning models. These images are used to train and compare YOLOv5 sub-models based on pure detection efficiency and standard object detection metrics, to select the optimum architecture. A novel post-processing filtering mechanism is then designed, which reduces false positive detections by 20.5%. Additionally, a pavement condition index (ASPDI) is engineered for deep learning-based models to identify areas in need of immediate maintenance. To facilitate decision-making by road administrations, a software application is created, which integrates the ASPDI, geotagged images, and detections. This tool has made it possible to detect two road sections in critical need of repair. The refined architecture is validated on open datasets, achieving mean average precision scores of 0.563 and 0.570 for RDD2022 and CPRI, respectively. This work was supported by the Ministry of Science and Innovation (ES) under Grant [TED2021-129749B-I00].
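The abstract does not spell out the filtering mechanism, so the following is only a hedged sketch, assuming such a filter combines a confidence threshold with a minimum plausible box area; the function name, tuple layout, and thresholds are hypothetical:

# Hedged sketch of a post-processing filter over raw detections:
# drop boxes that are low-confidence or implausibly small. The tuple
# layout, thresholds, and function name are illustrative assumptions.
def filter_detections(dets, min_conf=0.35, min_area=400.0):
    """dets: iterable of (class_id, conf, x1, y1, x2, y2) tuples in
    pixel coordinates; returns only the detections that survive."""
    kept = []
    for cls, conf, x1, y1, x2, y2 in dets:
        area = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        if conf >= min_conf and area >= min_area:
            kept.append((cls, conf, x1, y1, x2, y2))
    return kept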
Enhancing pavement crack segmentation via semantic diffusion synthesis model for strategic road assessment
Computer-aided deep learning has significantly advanced road crack segmentation. However, supervised models face challenges due to limited annotated images. There is also a lack of emphasis on deriving pavement condition indices from predicted masks. This article introduces a novel semantic diffusion synthesis model that creates synthetic crack images from segmentation masks. The model is optimized in terms of architectural complexity, noise schedules, and condition scaling. The optimal architecture outperforms state-of-the-art semantic synthesis models across multiple benchmark datasets, demonstrating superior image quality assessment metrics. The synthetic frames augment these datasets, resulting in segmentation models with significantly improved efficiency. This approach enhances results without extensive data collection or annotation, addressing a key challenge in engineering. Finally, a refined pavement condition index has been developed for automated end-to-end defect detection systems, promoting more effective maintenance planning. This work has been co-financed by the Ministry of Science and Innovation (ES) through the State Plan for Scientific and Technical Research and Innovation 2021-23 under the project MAPSIA [TED2021-129749B-I00], and by the Horizon Europe Research and Innovation Framework programme of the European Union under the project LIAISON [101103698].
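For readers unfamiliar with the "condition scaling" knob mentioned above, the standard classifier-free-guidance form (a general diffusion-model identity, not a detail taken from this paper) scales the mask-conditioned denoiser output against the unconditioned one:

\[
\hat{\epsilon}_\theta(x_t, m) = \epsilon_\theta(x_t, \varnothing) + s \left[ \epsilon_\theta(x_t, m) - \epsilon_\theta(x_t, \varnothing) \right],
\]

where \(m\) is the segmentation mask, \(\varnothing\) the null condition, and \(s\) the condition (guidance) scale.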
A new digital tool for the study of wine consumption habits
Work presented at the XV National Congress of Oenological Research, held in Murcia (Spain), 23–26 May 2022. This work arises from the need to implement new methodologies that quantify wine consumption objectively and more precisely than the most common tools based on Food Frequency Questionnaires (FFQ), which assign a fixed 100 mL per glass of wine. The objective of this work has been the development of a digital image-analysis tool, based on artificial intelligence, capable of quantifying the amount of red wine served in a glass from a simple photograph taken with a mobile phone. To build the tool, an image bank of 24,305 studio photographs was created, covering a series of variables that include different types of glasses and volumes of red wine, different lighting conditions, photographic backgrounds, object distances, and focal angles. From this bank, a regression model based on a convolutional neural network (CNN) was developed to predict the volume of red wine from a photograph of the glass containing the wine. The model has shown satisfactory performance, with a mean absolute error in the wine volume measurement below 10 mL. Starting from this first model, the next step is its optimization and validation by incorporating photographs that capture real wine-consumption situations, in the context of the diet and lifestyle of different population groups. We expect this new image-analysis tool to provide important support for collecting information on diet and wine consumption habits that is far more objective than survey-based data. We also expect to provide more precise data on individual wine consumption habits in Spain.
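As a hedged illustration only (the paper does not describe its network, so the architecture below is a hypothetical stand-in), a minimal convolutional regressor mapping a glass photograph to a single volume estimate in millilitres could look like this:

# Hedged sketch of a CNN regression model of the kind described:
# a photo of a wine glass in, one scalar volume estimate (mL) out.
# The layer sizes are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class VolumeRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # (N, 64, 1, 1) regardless of input size
        )
        self.head = nn.Linear(64, 1)   # predicted volume in mL

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

x = torch.randn(1, 3, 224, 224)        # one RGB photo
print(VolumeRegressor()(x).shape)      # torch.Size([1, 1])

Training such a model with an L1 loss would directly optimise the mean absolute error reported in the abstract.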
Open data provenance and reproducibility: a case study from publishing CMS open data
In this paper we present the latest CMS open data release published on the CERN Open Data portal. Samples of collision and simulated datasets were released together with detailed information about the data provenance. The associated data production chains cover the necessary computing environments, the configuration files, and the computational procedures used in each data production step. We describe the data curation techniques used to obtain and publish the data provenance information, and we study the possibility of reproducing parts of the released data using the publicly available information. The present work demonstrates the usefulness of releasing selected samples of raw and primary data in order to fully ensure the completeness of information about the data production chain, for the attention of general data scientists and other non-specialists interested in using particle physics data for education or research purposes.
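As a small illustration of working with such released records (this assumes the portal exposes an Invenio-style JSON API at /api/records/<recid>, and the record id used here is a placeholder, not one from this release), provenance metadata can be retrieved programmatically:

# Hedged sketch: fetch a record's metadata from the CERN Open Data
# portal. Assumes an Invenio-style JSON API at /api/records/<recid>;
# the record id 1 below is a placeholder, not from this release.
import json
import urllib.request

def fetch_record(recid):
    url = f"https://opendata.cern.ch/api/records/{recid}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

record = fetch_record(1)
print(record.get("metadata", {}).get("title"))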
Search for heavy resonances in the W/Z-tagged dijet mass spectrum in pp collisions at 7 TeV
A search has been made for massive resonances decaying into a quark and a vector boson, qW or qZ, or a pair of vector bosons, WW, WZ, or ZZ, where each vector boson decays to hadronic final states. This search is based on a data sample corresponding to an integrated luminosity of 5.0 fb⁻¹ of proton-proton collisions collected by the CMS experiment at the LHC in 2011 at a center-of-mass energy of 7 TeV. For sufficiently heavy resonances the decay products of each vector boson are merged into a single jet, and the event effectively has a dijet topology. The background from QCD dijet events is reduced using recently developed techniques that resolve jet substructure. A 95% CL lower limit is set on the mass of excited quark resonances decaying into qW (qZ) at 2.38 TeV (2.15 TeV), and upper limits are set on the cross section for resonances decaying to qW, qZ, WW, WZ, or ZZ final states.
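For context, the dijet invariant mass in which such a search is performed is the standard relativistic invariant built from the two jet four-momenta (a textbook definition, not a detail specific to this paper):

\[
m_{jj} = \sqrt{(E_{1} + E_{2})^{2} - \left| \vec{p}_{1} + \vec{p}_{2} \right|^{2}},
\]

so a resonance decaying to two merged boson jets appears as a localized excess in the \(m_{jj}\) spectrum.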
Search for a non-standard-model Higgs boson decaying to a pair of new light bosons in four-muon final states
Results are reported from a search for non-standard-model Higgs boson decays to pairs of new light bosons, each of which decays into the μ⁺μ⁻ final state. The new bosons may be produced either promptly or via a decay chain. The data set corresponds to an integrated luminosity of 5.3 fb⁻¹ of proton-proton collisions at √s = 7 TeV, recorded by the CMS experiment at the LHC in 2011. Such Higgs boson decays are predicted in several scenarios of new physics, including supersymmetric models with extended Higgs sectors or hidden valleys. Thus, the results of the search are relevant for establishing whether the new particle observed in Higgs boson searches at the LHC has the properties expected for a standard model Higgs boson. No excess of events is observed with respect to the yields expected from standard model processes. A model-independent upper limit of 0.86 ± 0.06 fb on the product of the cross section times branching fraction times acceptance is obtained. The results, which are applicable to a broad spectrum of new physics scenarios, are compared with the predictions of two benchmark models as functions of a Higgs boson mass larger than 86 GeV/c² and of a new light boson mass within the range 0.25–3.55 GeV/c².
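As background for how such a model-independent limit is quoted (a textbook relation, not a detail taken from this paper), the product of cross section, branching fraction, and acceptance relates to the expected signal yield through the selection efficiency and the integrated luminosity:

\[
N_{\text{sig}} = \sigma \cdot \mathcal{B} \cdot A \cdot \varepsilon \cdot \int L \, dt ,
\]

so an upper limit on \(N_{\text{sig}}\) translates directly into the quoted limit on \(\sigma \cdot \mathcal{B} \cdot A\).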
Measurement of the Production Cross Section for Pairs of Isolated Photons in collisions at TeV
The integrated and differential cross sections for the production of pairs of isolated photons are measured in proton-proton collisions at a centre-of-mass energy of 7 TeV with the CMS detector at the LHC. A data sample corresponding to an integrated luminosity of 36 inverse picobarns is analysed. A next-to-leading-order perturbative QCD calculation is compared to the measurements. A discrepancy is observed for regions of the phase space where the two photons have an azimuthal angle difference, Δφ, less than approximately 2.8.
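For clarity, the azimuthal angle difference used in such measurements is the usual separation of the two photons in the transverse plane, wrapped into \([0, \pi]\):

\[
\Delta\varphi = \min\!\left( \left| \varphi_{\gamma_1} - \varphi_{\gamma_2} \right| ,\; 2\pi - \left| \varphi_{\gamma_1} - \varphi_{\gamma_2} \right| \right) \in [0, \pi],
\]

so back-to-back photon pairs sit near \(\Delta\varphi = \pi\) and the reported discrepancy concerns the region \(\Delta\varphi \lesssim 2.8\).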