    A portrait of the Higgs boson by the CMS experiment ten years after the discovery

    In July 2012, the ATLAS and CMS collaborations at the CERN Large Hadron Collider announced the observation of a Higgs boson at a mass of around 125 gigaelectronvolts. Ten years later, and with the data corresponding to the production of a 30-times larger number of Higgs bosons, we have learnt much more about the properties of the Higgs boson. The CMS experiment has observed the Higgs boson in numerous fermionic and bosonic decay channels, established its spin–parity quantum numbers, determined its mass and measured its production cross-sections in various modes. Here the CMS Collaboration reports the most up-to-date combination of results on the properties of the Higgs boson, including the most stringent limit on the cross-section for the production of a pair of Higgs bosons, on the basis of data from proton–proton collisions at a centre-of-mass energy of 13 teraelectronvolts. Within the uncertainties, all these observations are compatible with the predictions of the standard model of elementary particle physics. Much evidence points to the fact that the standard model is a low-energy approximation of a more comprehensive theory. Several of the standard model issues originate in the sector of Higgs boson physics. An order of magnitude larger number of Higgs bosons, expected to be examined over the next 15 years, will help deepen our understanding of this crucial sector.

    Dataflow Programming and Acceleration of Computationally-Intensive Algorithms

    The volume of unstructured textual information continues to grow due to recent technological advancements. This has resulted in an exponential growth of information generated in various formats, including blogs, posts, social networking, and enterprise documents. Numerous Enterprise Architecture (EA) documents are also created daily, such as reports, contracts, agreements, frameworks, architecture requirements, designs, and operational guides. Processing and computing this massive amount of unstructured information requires substantial computing capabilities and the implementation of new techniques. It is critical to manage this unstructured information through a centralized knowledge management platform. Knowledge management is the process of managing information within an organization. This involves creating, collecting, organizing, and storing information in a way that makes it easily accessible and usable. The research involved the development of a textual knowledge management system, and two use cases were considered for extracting textual knowledge from documents. The first case study focused on the safety-critical documents of a railway enterprise. Safety is of paramount importance in the railway industry. Several EA documents, including manuals, operational procedures, and technical guidelines, contain critical information. Digitalization of these documents is essential for analysing the vast amount of textual knowledge they contain, in order to improve the safety and security of railway operations. A case study was conducted between the University of Huddersfield and the Railway Safety Standard Board (RSSB) to analyse EA safety documents using natural language processing (NLP). A graphical user interface was developed that includes various document processing features such as semantic search, document mapping, text summarization, and visualization of key trends. For the second case study, open-source data was utilized and textual knowledge was extracted. Several features were also developed, including kernel distribution, analysis of key trends, and sentiment analysis of words (such as unique, positive, and negative) within the documents. Additionally, a heterogeneous framework was designed using CPUs/GPUs and FPGAs to analyse the computational performance of document mapping.
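    As a rough sketch of the kind of word-level analysis described above (key-term frequencies, unique-word counts and a naive positive/negative tally), the snippet below uses only the Python standard library; the word lists, tokenisation and sample text are illustrative assumptions, not the lexicon or pipeline built in the case studies.

```python
# Minimal sketch of word-level document analysis: key-term frequencies plus a
# naive positive/negative word count. The POSITIVE/NEGATIVE sets are invented
# for illustration and are not the lexicon used in the case studies.
import re
from collections import Counter

POSITIVE = {"safe", "improved", "effective", "reliable"}
NEGATIVE = {"hazard", "failure", "risk", "delay"}

def analyse(text: str) -> dict:
    """Return top terms, unique-word count and naive sentiment tallies."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    return {
        "top_terms": counts.most_common(5),
        "unique_words": len(counts),
        "positive_hits": sum(counts[w] for w in POSITIVE),
        "negative_hits": sum(counts[w] for w in NEGATIVE),
    }

if __name__ == "__main__":
    sample = ("Signal failure is a known hazard; the revised procedure "
              "is safe, effective and reliable.")
    print(analyse(sample))
```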

    Libro de Abstracts | VIII Jornadas de Investigación y Doctorado: “Ética en la Investigación Científica”

    The aim of these Jornadas is to promote scientific exchange among doctoral students, encouraging participation, debate and discussion of scientific matters as important as research ethics. To highlight the role that doctorate holders play in society, we cannot overlook the transversal competences they must acquire during their doctoral training. While ethics is fundamental in every facet of life, it is especially relevant for researchers, since they generate the knowledge on which future developments and policies of interest to society as a whole will be built. Therefore, in order to increase both the social impact of the research carried out and the professional standing of doctorate holders, it is important to strengthen their ethical training. Academic research is built on trust. Researchers trust that the results reported by others are truthful. Society trusts that research results reflect an honest attempt by scientists to describe the world accurately. But this trust will only last if the scientific community passes on the values associated with ethical research conduct. For this reason, the University plays a very important role in training doctoral candidates in the ethical questions inherent to the scientific method and the generation of knowledge. Within universities, the International Doctoral Schools, with our resources, skills and sphere of influence, become key actors in promoting ethical attitudes among doctoral candidates, and these Jornadas are a very valuable opportunity to address this subject. The branches of knowledge covered by these Jornadas are those derived from the doctoral programmes of the EIDUCAM: Health Sciences; Computing Technologies and Environmental Engineering; Social Sciences; Sport Sciences. Subject areas: Physical Activity and Sport; Business Administration and Management; Agriculture and Veterinary Science; Arts and Humanities; Environmental Sciences; Food Sciences; Communication Sciences; Religious Sciences; Law; Education; Nursing; Pharmacy; Languages; Engineering, Industry and Construction; Medicine; Dentistry; Podiatry; Psychology; Therapy and Rehabilitation; Tourism.

    Undergraduate Catalog of Studies, 2022-2023

    A Survey of FPGA Optimization Methods for Data Center Energy Efficiency

    This article provides a survey of the academic literature on field programmable gate arrays (FPGAs) and their utilization for accelerating energy efficiency in data centers. The goal is to critically present the existing FPGA energy optimization techniques and discuss how they can be applied to such systems. To do so, the article explores current energy trends and their projection into the future, with particular attention to the requirements set out by the European Code of Conduct for Data Center Energy Efficiency. The article then proposes a complete analysis of over ten years of research in energy optimization techniques, classifying them by purpose, method of application, and impact on the sources of consumption. Finally, we conclude with the challenges and possible innovations we expect for this sector. Comment: Accepted for publication in IEEE Transactions on Sustainable Computing.

    20th SC@RUG 2023 proceedings 2022-2023

    Optimisation for Optical Data Centre Switching and Networking with Artificial Intelligence

    Cloud and cluster computing platforms have become standard across almost every domain of business, and their scale quickly approaches O(10^6) servers in a single warehouse. However, the tier-based opto-electronically packet-switched network infrastructure that is standard across these systems gives rise to several scalability bottlenecks, including resource fragmentation and high energy requirements. Experimental results show that optical circuit switched networks are a promising alternative that could avoid these bottlenecks. However, optimality challenges are encountered at realistic commercial scales. Where exhaustive optimisation techniques are not applicable to problems at the scale of Cloud computer networks, and expert-designed heuristics are performance-limited and typically biased in their design, artificial intelligence can discover more scalable and better performing optimisation strategies. This thesis demonstrates these benefits through experimental and theoretical work spanning the component, system and commercial optimisation problems which stand in the way of practical Cloud-scale computer network systems. Firstly, optical components are optimised to gate in ≈500 ps and are demonstrated in a proof-of-concept switching architecture for optical data centres with better wavelength and component scalability than previous demonstrations. Secondly, network-aware resource allocation schemes for optically composable data centres are learnt end-to-end with deep reinforcement learning and graph neural networks, where 3× fewer networking resources are required to achieve the same resource efficiency compared to conventional methods. Finally, a deep reinforcement learning based method for optimising PID-control parameters is presented which generates tailored parameters for unseen devices in O(10^-3) s. This method is demonstrated on a market-leading optical switching product based on piezoelectric actuation, where switching speed is improved by >20% with no compromise to optical loss, and the manufacturing yield of actuators is improved. This method was licensed to and integrated within the manufacturing pipeline of this company. As such, crucial public and private infrastructure utilising these products will benefit from this work.
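    For orientation, the sketch below shows the kind of discrete PID loop whose gains such a learnt optimiser would propose per device. It is a generic textbook controller with an invented first-order plant and hand-picked gains, not the thesis's method or the commercial switch's firmware.

```python
# Generic discrete PID controller driving a toy plant. The gains, time step and
# plant time constant are illustrative placeholders, not values from the thesis
# or the piezoelectric switching product it optimises.
class PID:
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def settle(gains, steps=200, dt=1e-4):
    """Drive a toy first-order plant towards a unit setpoint and return its trajectory."""
    pid = PID(*gains, dt=dt)
    position, trace = 0.0, []
    for _ in range(steps):
        control = pid.update(setpoint=1.0, measurement=position)
        position += (control - position) * dt / 1e-2  # toy plant, 10 ms time constant
        trace.append(position)
    return trace


if __name__ == "__main__":
    # A learnt optimiser would propose per-device gains; these are hand-picked.
    trajectory = settle((2.0, 50.0, 1e-3))
    print(f"position after {len(trajectory)} steps: {trajectory[-1]:.3f}")
```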

    Analog Photonics Computing for Information Processing, Inference and Optimisation

    This review presents an overview of the current state-of-the-art in photonics computing, which leverages photons, photons coupled with matter, and optics-related technologies for effective and efficient computational purposes. It covers the history and development of photonics computing and modern analogue computing platforms and architectures, focusing on optimization tasks and neural network implementations. The authors examine special-purpose optimizers, mathematical descriptions of photonics optimizers, and their various interconnections. Disparate applications are discussed, including direct encoding, logistics, finance, phase retrieval, machine learning, neural networks, probabilistic graphical models, and image processing, among many others. The main directions of technological advancement and associated challenges in photonics computing are explored, along with an assessment of its efficiency. Finally, the paper discusses prospects and the field of optical quantum computing, providing insights into the potential applications of this technology. Comment: Invited submission by Journal of Advanced Quantum Technologies; accepted version 5/06/202

    Research and development for the data, trigger and control card in preparation for Hi-Lumi LHC

    When the Large Hadron Collider (LHC) increases its luminosity by an order of magnitude in the coming decade, the experiments that sit upon it must also be upgraded to maintain their physics performance in the increasingly demanding environment. To achieve this, the Compact Muon Solenoid (CMS) experiment will make use of tracking information in the Level-1 trigger for the first time, meaning that track reconstruction must be achieved in less than 4 μs in an all-FPGA architecture. MUonE is an experiment aiming to make an accurate measurement of the hadronic contribution to the anomalous magnetic moment of the muon. It will achieve this by making use of apparatus similar to that designed for CMS, benefiting from the research and development efforts there. This thesis presents both development and testing work for the readout chain from tracker module to back-end processing card, as well as the results and analysis of a beam test used to validate this chain for both CMS and the MUonE experiment.