
    Search for long-lived particles decaying into the semi-leptonic di-tau final state with the ATLAS detector at the LHC

    Many theoretical extensions of the Standard Model predict the existence of new long-lived particles within the discovery reach of the Large Hadron Collider (LHC). This thesis presents a search for long-lived particles that decay to a pair of tau leptons, one decaying hadronically and the other leptonically. Tau final states lie at the interface between leptonic and hadronic searches and are much less thoroughly constrained. Several approaches are taken to address the experimental challenges encountered in the search for displaced hadronic taus. The development of a novel tau track classification algorithm capable of accurately identifying tracks belonging to taus decaying to one or three charged pions is detailed. The resulting displaced track classifier demonstrates significantly higher efficiency than the nominal recommendations. Enhancements made to the existing ATLAS track classification algorithm in preparation for Run 3 data taking at the LHC are also outlined. A newly developed RNN-based algorithm for identifying displaced tau leptons is presented. Combined with the displaced track classification algorithm, it yields a displaced tau identification procedure that significantly improves background rejection and signal acceptance in a model-independent way, raising the classification efficiency for 1-prong taus from about 40% to 80% and for 3-prong taus from about 20% to 60%. The thesis primarily presents a methodology combining reconstruction and identification techniques, which are then folded into an analysis targeting exotic long-lived particles decaying to tau leptons. This signature-driven analysis aims to set the first stringent limits on long-lived particles decaying to third-generation leptons. Major steps in this analysis have been taken and results are presented.
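    The track-plus-RNN identification chain lends itself to a compact illustration. Below is a minimal sketch of a recurrent network scoring tau candidates from their associated track features; the feature set, layer sizes, and architecture are illustrative assumptions, not the thesis's configuration.

```python
import torch
import torch.nn as nn

# Minimal sketch of an RNN over per-candidate track features emitting a tau
# identification score, in the spirit of the displaced-tau RNN described
# above. Features, sizes, and thresholds are illustrative assumptions.

class DisplacedTauRNN(nn.Module):
    def __init__(self, n_features=6, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 16), nn.ReLU(),
                                  nn.Linear(16, 1))

    def forward(self, tracks):
        # tracks: (batch, n_tracks, n_features), e.g. pT, eta, phi, d0, z0, ...
        _, h = self.rnn(tracks)                  # final hidden state
        return torch.sigmoid(self.head(h[-1])).squeeze(-1)  # score in [0, 1]

model = DisplacedTauRNN()
candidates = torch.randn(4, 3, 6)  # 4 tau candidates, 3 tracks each
print(model(candidates))           # per-candidate identification scores
```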

    Serverless Strategies and Tools in the Cloud Computing Continuum

    Thesis by compendium. In recent years, the popularity of Cloud computing has allowed users to access unprecedented compute, network, and storage resources under a pay-per-use model. This popularity has led to new services that solve specific large-scale computing challenges and simplify the development and deployment of applications. Among the most prominent services in recent years are FaaS (Function as a Service) platforms, whose primary appeal is the ease of deploying small pieces of code in certain programming languages to perform specific tasks on an event-driven basis. These functions are executed on the Cloud provider's servers without users worrying about their maintenance or elasticity management, always keeping a fine-grained pay-per-use model. FaaS platforms belong to the computing paradigm known as Serverless, which aims to abstract the management of servers from the users, allowing them to focus their efforts solely on the development of applications. The problem with FaaS is that it focuses on microservices and tends to have limitations regarding execution time and computing capabilities (e.g. lack of support for acceleration hardware such as GPUs). However, it has been demonstrated that the self-provisioning capability and high degree of parallelism of these services can be well suited to broader applications. In addition, their inherent event-driven triggering makes functions perfectly suitable to be defined as steps in file-processing workflows (e.g. scientific computing workflows). Furthermore, the rise of smart and embedded devices (IoT), innovations in communication networks, and the need to reduce latency in challenging use cases have led to the concept of Edge computing, which consists of conducting the processing on devices close to the data sources to improve response times. The coupling of this paradigm with Cloud computing, involving architectures with devices at different levels depending on their proximity to the source and their compute capability, has been coined the Cloud Computing Continuum (or Computing Continuum). This PhD thesis therefore aims to apply different Serverless strategies to enable the deployment of generalist applications, packaged in software containers, across the different tiers of the Cloud Computing Continuum. To this end, multiple tools have been developed in order to: i) adapt FaaS services from public Cloud providers; ii) integrate different software components to define a Serverless platform on on-premises and Edge infrastructures; iii) leverage acceleration devices on Serverless platforms; and iv) facilitate the deployment of applications and workflows through user interfaces. Additionally, several use cases have been created and adapted to assess the developments achieved. Risco Gallardo, S. (2023). Serverless Strategies and Tools in the Cloud Computing Continuum [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/202013
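    The FaaS model at the core of the thesis is easy to illustrate. Below is a generic sketch of an event-driven, file-processing function in the AWS Lambda style, following the standard S3 event layout; the processing step is a placeholder, and none of this is the thesis's own tooling.

```python
import json

# Generic FaaS handler: invoked once per event, here an S3-style
# object-created notification that triggers one step of a file-processing
# workflow. The provider scales instances with the event rate under
# pay-per-use billing; no server management is exposed to the user.

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # placeholder: fetch the object, run the processing step, store output
        print(f"processing s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"processed": True})}
```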

    Studies of new Higgs boson interactions through nonresonant HH production in the bb̄γγ final state in pp collisions at √s = 13 TeV with the ATLAS detector

    A search for nonresonant Higgs boson pair production in the bb̄γγ final state is performed using 140 fb⁻¹ of proton-proton collisions at a centre-of-mass energy of 13 TeV recorded by the ATLAS detector at the CERN Large Hadron Collider. This analysis supersedes and expands upon the previous nonresonant ATLAS results in this final state based on the same data sample. The analysis strategy is optimised to probe anomalous values not only of the Higgs (H) boson self-coupling modifier κλ but also of the quartic HHVV (V = W, Z) coupling modifier κ2V. No significant excess above the expected background from Standard Model processes is observed. An observed upper limit μHH < 4.0 is set at 95% confidence level on the Higgs boson pair production cross-section normalised to its Standard Model prediction. The 95% confidence intervals for the coupling modifiers are −1.4 < κλ < 6.9 and −0.5 < κ2V < 2.7, assuming all other Higgs boson couplings except the one under study are fixed to the Standard Model predictions. The results are interpreted in the Standard Model effective field theory and Higgs effective field theory frameworks in terms of constraints on the couplings of anomalous Higgs boson (self-)interactions.
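    The coupling-modifier scan exploits the fact that each contributing amplitude is linear in the modifiers, so the HH cross-section is a quadratic polynomial in them. Schematically, for gluon fusion (the coefficients are fitted to simulation and are not quoted in the abstract):

```latex
% Triangle (self-coupling) and box amplitudes are both linear in the
% modifiers, so the nonresonant cross-section is quadratic in kappa_lambda:
\frac{\sigma_{\mathrm{ggF}}(\kappa_\lambda)}{\sigma_{\mathrm{ggF}}^{\mathrm{SM}}}
  = c_0 + c_1\,\kappa_\lambda + c_2\,\kappa_\lambda^{2}
% The VBF channel is likewise a quadratic form in
% (\kappa_\lambda, \kappa_{2V}, \kappa_V).
```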

    Resource-aware scheduling for 2D/3D multi-/many-core processor-memory systems

    This dissertation addresses the complexities of 2D/3D multi-/many-core processor-memory systems, focusing on two key areas: enhancing timing predictability in real-time multi-core processors and optimizing performance within thermal constraints. The integration of an increasing number of transistors into compact chip designs, while boosting computational capacity, presents challenges in resource contention and thermal management. The first part of the thesis improves timing predictability. We enhance shared cache interference analysis for set-associative caches, advancing the calculation of Worst-Case Execution Time (WCET). This development enables accurate assessment of cache interference and the effectiveness of partitioned schedulers in real-world scenarios. We introduce TCPS, a novel task- and cache-aware partitioned scheduler that optimizes cache partitioning based on task-specific WCET sensitivity, leading to improved schedulability and predictability. Our research explores various cache and scheduling configurations, providing insights into their performance trade-offs. The second part focuses on thermal management in 2D/3D many-core systems. Recognizing the limitations of Dynamic Voltage and Frequency Scaling (DVFS) in S-NUCA many-core processors, we propose synchronous thread migrations as a thermal management strategy. This approach culminates in the HotPotato scheduler, which balances performance and thermal safety. We also introduce 3D-TTP, a transient temperature-aware power budgeting strategy for 3D-stacked systems, reducing the need for Dynamic Thermal Management (DTM) activation. Finally, we present 3QUTM, a novel method for 3D-stacked systems that combines core DVFS and memory bank Low Power Modes with a learning algorithm, optimizing response times within thermal limits. This research contributes significantly to enhancing performance and thermal management in advanced processor-memory systems.
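    The idea of allocating shared cache by WCET sensitivity can be sketched compactly. The greedy allocation below is a hypothetical illustration in the spirit of a task- and cache-aware partitioner, not the actual TCPS algorithm.

```python
# Hypothetical cache-way allocation driven by per-task WCET sensitivity:
# each way goes to the task with the largest marginal WCET reduction.

def partition_cache(wcet, n_ways):
    """wcet[t][k] is task t's WCET when allotted k cache ways."""
    alloc = {t: 0 for t in wcet}
    for _ in range(n_ways):
        gains = {t: wcet[t][alloc[t]] - wcet[t][alloc[t] + 1]
                 for t in wcet if alloc[t] + 1 < len(wcet[t])}
        if not gains:
            break
        best = max(gains, key=gains.get)  # most WCET saved per extra way
        alloc[best] += 1
    return alloc

# toy example: t1 is highly cache-sensitive, t2 barely benefits
wcet = {"t1": [100, 60, 45, 40, 39], "t2": [50, 48, 47, 47, 47]}
print(partition_cache(wcet, 4))  # most ways go to t1
```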

    The multivalency game ruling the biology of immunity

    Macrophages play a crucial role in our immune system, preserving tissue health and defending against harmful pathogens. This article examines the diversity of macrophages influenced by tissue-specific functions and developmental origins, both in normal and disease conditions. Understanding the spectrum of macrophage activation states, especially in pathological situations where they contribute significantly to disease progression, is essential to develop targeted therapies effectively. These states are characterized by unique receptor compositions and phenotypes, but they share commonalities. Traditional drugs that target individual entities are often insufficient. A promising approach involves using multivalent systems adorned with multiple ligands to selectively target specific macrophage populations based on their phenotype. Achieving this requires constructing supramolecular structures, typically at the nanoscale. This review explores the theoretical foundation of engineered multivalent nanosystems, dissecting the key parameters governing specific interactions. The goal is to design targeting systems based on distinct cell phenotypes, providing a pragmatic approach to navigating the complexities of macrophage heterogeneity for more effective therapeutic interventions.
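    The sharper density discrimination that multivalency buys can be seen in a standard toy binding model. The sketch below uses purely illustrative constants; it is not fitted to any system discussed in the review.

```python
import numpy as np

# Toy superselectivity model: a particle with k ligands has a partition
# function over bound states z = z0 * ((1 + K*nR)**k - 1), so the bound
# fraction theta = z / (1 + z) switches on far more sharply with receptor
# density nR at high valency -- the basis for phenotype-selective targeting.

z0, K = 1e-4, 1.0                 # illustrative constants
nR = np.logspace(-2, 2, 200)      # relative receptor density
for k in (1, 5, 20):              # ligand valency
    z = z0 * ((1 + K * nR) ** k - 1)
    theta = z / (1 + z)
    sel = np.gradient(np.log(theta), np.log(nR))  # selectivity exponent
    print(f"valency {k:2d}: max d(ln theta)/d(ln nR) ~ {sel.max():.1f}")
```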

    The development of bioinformatics workflows to explore single-cell multi-omics data from T and B lymphocytes

    The adaptive immune response is responsible for recognising, containing and eliminating viral infection, and protecting from further reinfection. This antigen-specific response is driven by T and B cells, which recognise antigenic epitopes via highly specific heterodimeric surface receptors, termed T-cell receptors (TCRs) and B cell receptors (BCRs). The theoretical diversity of the receptor repertoire that can be generated via homologous recombination of V, D and J genes is large enough (>10^15 unique sequences) that virtually any antigen can be recognised. However, only a subset of these are generated within the human body, and how they succeed in specifically recognising any pathogen(s) and distinguishing these from self-proteins remains largely unresolved. The recent advances in applying single-cell genomics technologies to simultaneously measure the clonality, surface phenotype and transcriptomic signature of pathogen-specific immune cells have significantly improved understanding of these questions. Single-cell multi-omics permits the accurate identification of clonally expanded populations, their differentiation trajectories, the level of immune receptor repertoire diversity involved in the response and the phenotypic and molecular heterogeneity. This thesis aims to develop a bioinformatic workflow utilising single-cell multi-omics data to explore, quantify and predict the clonal and transcriptomic signatures of the human T-cell response during and following viral infection. In the first aim, a web application, VDJView, was developed to facilitate the simultaneous analysis and visualisation of clonal, transcriptomic and clinical metadata of T and B cell multi-omics data. The application permits non-bioinformaticians to perform quality control and common analyses of single-cell genomics data integrated with other metadata, thus permitting the identification of biologically and clinically relevant parameters. The second aim pertains to analysing the functional, molecular and immune receptor profiles of CD8+ T cells in the acute phase of primary hepatitis C virus (HCV) infection. This analysis identified a novel population of progenitors of exhausted T cells, and lineage tracing revealed distinct trajectories with multiple fates and evolutionary plasticity. Furthermore, it was observed that a high-magnitude IFN-γ CD8+ T-cell response is associated with an increased probability of viral escape and chronic infection. Finally, in the third aim, a novel analysis is presented based on the topological characteristics of a network generated on pathogen-specific, paired-chain, CD8+ TCRs. This analysis revealed how some cross-reactivity between TCRs can be explained via the sequence similarity between TCRs, and that this property is not uniformly distributed across all pathogen-specific TCR repertoires. Strong correlations between the topological properties of the network and the biological properties of the TCR sequences were identified and highlighted. The suite of workflows and methods presented in this thesis is designed to be adaptable to various T and B cell multi-omic datasets. The associated analyses contribute to understanding the role of T and B cells in the adaptive immune response to viral infection and cancer.
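    The network construction in the third aim can be illustrated minimally: TCRs are nodes, and an edge joins sequences within edit distance 1 of each other, a common similarity criterion. The CDR3 sequences below are hypothetical, and the thesis's actual pipeline and thresholds are not specified in the abstract.

```python
import itertools
import networkx as nx

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# hypothetical CDR3 amino-acid sequences
cdr3s = ["CASSLGTDTQYF", "CASSLGADTQYF", "CASSPGTDTQYF", "CASRRGDEQFF"]
g = nx.Graph()
g.add_nodes_from(cdr3s)
for a, b in itertools.combinations(cdr3s, 2):
    if edit_distance(a, b) <= 1:     # putative shared specificity
        g.add_edge(a, b)

# topological properties can then be compared across pathogen-specific repertoires
print(dict(g.degree()), nx.transitivity(g))
```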

    R-Pyocin Regulation, Release, and Susceptibility in Pseudomonas aeruginosa

    Pseudomonas aeruginosa is a Gram-negative opportunistic pathogen and a major determinant of declining lung function in individuals with cystic fibrosis (CF). P. aeruginosa possesses many intrinsic antibiotic resistance mechanisms, and isolates from chronic CF lung infections develop increasing resistance to multiple antibiotics over time. Chronic infection with P. aeruginosa remains one of the main causes of mortality and morbidity in CF patients, thus new therapeutic interventions are necessary. R-type pyocins are narrow-spectrum, phage tail-like bacteriocins produced by P. aeruginosa specifically to kill other strains of P. aeruginosa. Due to their specific anti-pseudomonal activity and similarity to bacteriophage, R-pyocins have potential as additional therapeutics for P. aeruginosa, either in isolation, in combination with antibiotics, or as an alternative to phage therapy. There are five subtypes of R-pyocin (types R1-R5), and each P. aeruginosa strain is thought to produce only one of these, suggesting a degree of strain-specificity. It is known that P. aeruginosa populations in chronic CF lung infection become phenotypically and genotypically diverse over time; however, little is known of the efficacy of R-pyocins against heterogeneous populations. Even less is known regarding the timing and regulation of R-pyocins in CF lung infections, or whether P. aeruginosa uses R-pyocin production during infection for competition or otherwise, which may influence pressure towards R-pyocin resistance. In this work, I evaluated R-pyocin type and susceptibility among P. aeruginosa isolates sourced from CF infections and found that (i) R1-pyocins are the most prevalent R-type among respiratory infection and CF strains; (ii) a large proportion of P. aeruginosa strains lack R-pyocin genes entirely; (iii) isolates from P. aeruginosa populations collected from the same patient at a single time point have the same R-pyocin type; (iv) there is heterogeneity in susceptibility to R-pyocins within P. aeruginosa populations; and (v) susceptibility is likely driven by diversity of LPS phenotypes within clinical populations. These findings suggest that there is likely heterogeneity in response to other types of LPS-binding antimicrobials, including phage, which is important when considering such antimicrobials as therapeutics. To investigate the prevalence of R2-pyocin-susceptible strains in CF, I then used 110 isolates of P. aeruginosa collected from five individuals with CF to test for R2-pyocin susceptibility and identify LPS phenotypes. From this collection, (i) we estimated that approximately 83% of sputum samples contain heterogeneous P. aeruginosa populations without R2-pyocin-resistant isolates, and all sputum samples contained susceptible isolates; (ii) we found no correlation between R2-pyocin susceptibility and LPS phenotypes; and (iii) we estimated that approximately 76% of isolates sampled from sputum lack O-specific antigen, 42% lack common antigen, and 27% exhibit altered LPS cores. These findings suggest that LPS packing density may play a more influential role in mediating R-pyocin susceptibility during infection. Finding the majority of our sampled P. aeruginosa populations to be R2-pyocin susceptible further supports the potential of these narrow-spectrum antimicrobials despite heterogeneous susceptibility among diverse populations.
To evaluate how R-pyocins may influence strain competition and growth in CF lung infection, I assessed R-pyocin activity in an infection-relevant environment (Synthetic Cystic Fibrosis Sputum Medium; SCFM2) and found that (i) R-pyocin genes are transcribed more in the CF nutrient environment than in rich laboratory medium and (ii) in a structured, CF-like environment, R-pyocin induction is costly to producing strains in competition rather than beneficial. Our work suggests that R-pyocins may not be essential in CF lung infection and can be costly to producing cells in the presence of stress-response-inducing stimuli, such as those commonly found in infection. In this thesis I have studied R-pyocin susceptibility, regulation, and release using a biobank of whole P. aeruginosa populations collected from 11 individuals with CF, as well as the CF infection model (SCFM), to understand the mechanisms of R-pyocin activity in an infection-relevant context and the role R-pyocins play in shaping P. aeruginosa populations during infection. The findings of this work have illuminated the impact of P. aeruginosa heterogeneity on R-pyocin susceptibility, furthered our understanding of R-pyocins as potential therapeutics, and built upon our knowledge of bacteriocin-mediated interactions.

    Medical Image Analysis using Deep Relational Learning

    In the past ten years, with the help of deep learning, especially the rapid development of deep neural networks, medical image analysis has made remarkable progress. However, how to effectively use the relational information between the various tissues or organs in medical images remains a very challenging problem that has not been fully studied. In this thesis, we propose two novel solutions to this problem based on deep relational learning. First, we propose a context-aware fully convolutional network that effectively models implicit relational information between features to perform medical image segmentation. The network achieves state-of-the-art segmentation results on the Multimodal Brain Tumor Segmentation 2017 (BraTS2017) and 2018 (BraTS2018) datasets. Subsequently, we propose a new hierarchical homography estimation network to achieve accurate medical image mosaicing by learning the explicit spatial relationship between adjacent frames. We conduct experiments on the UCL Fetoscopy Placenta dataset, where our hierarchical homography estimation network outperforms other state-of-the-art mosaicing methods while generating robust and meaningful mosaicing results on unseen frames.
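    For context, the quantity the mosaicing network estimates is the planar homography between adjacent frames. The classical direct-linear-transform (DLT) baseline below recovers it from point correspondences; it is a reference sketch with synthetic points, not the thesis's learned method.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate 3x3 H with dst ~ H @ src from >= 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)         # null vector = homography up to scale
    return h / h[2, 2]

# synthetic check: recover a known homography from 4 correspondences
H_true = np.array([[1.0, 0.1, 5.0], [-0.05, 1.0, 3.0], [1e-4, 0.0, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
proj = np.c_[src, np.ones(4)] @ H_true.T
dst = proj[:, :2] / proj[:, 2:]
print(np.round(homography_dlt(src, dst), 4))  # ~= H_true
```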

    GNNBuilder: An Automated Framework for Generic Graph Neural Network Accelerator Generation, Simulation, and Optimization

    There are plenty of graph neural network (GNN) accelerators being proposed; however, they rely heavily on users' hardware expertise and are usually optimized for one specific GNN model, making them challenging for practical use. Therefore, in this work, we propose GNNBuilder, the first automated, generic, end-to-end GNN accelerator generation framework. It features four advantages: (1) GNNBuilder can automatically generate GNN accelerators for a wide range of GNN models arbitrarily defined by users; (2) GNNBuilder takes the standard PyTorch programming interface, introducing zero overhead for algorithm developers; (3) GNNBuilder supports end-to-end code generation, simulation, accelerator optimization, and hardware deployment, realizing push-button GNN accelerator design; (4) GNNBuilder is equipped with accurate performance models of its generated accelerators, enabling fast and flexible design space exploration (DSE). In the experiments, we first show that our accelerator performance model has errors within 36% for latency prediction and 18% for BRAM count prediction. Second, we show that our generated accelerators can outperform CPU by 6.33× and GPU by 6.87×. This framework is open-source, and the code is available at https://anonymous.4open.science/r/gnn-builder-83B4/.
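    A model of the kind the framework's "standard PyTorch programming interface" could accept is sketched below with PyTorch Geometric; the layer choice and sizes are generic examples, since the abstract does not state the supported layer set.

```python
import torch
from torch_geometric.nn import GCNConv, global_mean_pool

# Generic graph-classification GNN written against the standard PyTorch
# interface -- the style of model an automated accelerator generator would
# take as its specification.

class GNN(torch.nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, batch):
        x = self.conv1(x, edge_index).relu()
        x = self.conv2(x, edge_index).relu()
        x = global_mean_pool(x, batch)   # graph-level readout
        return self.head(x)

model = GNN(in_dim=16, hidden=64, n_classes=4)
x = torch.randn(10, 16)                            # 10 nodes, 16 features
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])  # 3 directed edges
batch = torch.zeros(10, dtype=torch.long)          # a single graph
print(model(x, edge_index, batch).shape)           # torch.Size([1, 4])
```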