    Analysis of the autoreactivity of leukemic antibodies supported by Artificial Intelligence strategies

    Antigens are external molecules of varied structure and nature that are recognized by the organism. The immune system has developed recognition mechanisms against these pathogenic agents, providing different lines of defense against possible infection, with antibodies responsible for this detection. Predicting which antibody will recognize an antigen, or estimating at a qualitative level the intensity of the resulting interaction, is an arduous and complex task and a significant challenge in immunology. Because antigens can be different types of molecules and originate from different pathogens, how an antibody recognizes a set of antigens with varying interaction intensities is a question that has been approached from several perspectives. The organism has also developed strategies to distinguish external molecules from its own, which prevents an immune response from being mounted against the body's own tissues. The body's own molecules that trigger such a response are called self-antigens, and the process of mounting defenses against them is called autoreactivity. The analysis of self-antigens is highly relevant both for the study of autoimmune diseases and for diseases involving the body's own cells. In leukemia, a cancer that affects cells of the blood tissue, the study of autoreactivity and of the interaction between self-antigens and antibodies is central to the design of proposals for diagnosing and treating the disease. Most studies of the interaction between self-antigens and antibodies have relied on experimental techniques. However, various in-silico approaches have been developed using computational tools such as docking or molecular simulation for free-energy calculations and visualization of interactions. Despite their great utility, these techniques carry high costs associated with the need for experimental material, the need for defined structures or reliable models, and long simulation times, among others. The application of machine learning techniques together with different encoding methods therefore represents a powerful alternative for the problem of recognizing protein-protein interactions, in particular for leukemia self-antigen and antibody sequences. From interaction data between 45 antibody heavy-chain sequences and about 8000 self-antigen sequences, a qualitative ensemble predictive system for the intensity level of the interaction between self-antigens and antibody heavy chains was designed and implemented. As training strategies for the predictive models, several protein representation methods, mainly Natural Language Processing and physicochemical properties, were combined with different supervised learning algorithms, yielding an ensemble predictor with 81% accuracy. Different validation strategies were applied to demonstrate the robustness of the proposed predictive system, including cross-validation schemes and purpose-built methods based on Leave One Antibody Out strategies.
    Additionally, a set of collections of immunological molecules integrated into a single database system was designed and implemented. Coupled with a phylogenetic classification strategy, a method for classifying self-antigen sequences based on descriptive and functional properties and on phylogenetic components was designed and implemented; this method estimates how new sequences relate to the existing set of self-antigen sequences. The combination of the collections and the classification system, together with the ensemble predictive system, facilitates the design of strategies for identifying self-antigen sequences and evaluating them against leukemic antibodies, providing initial support for tools for the design and discovery of antigens/antibodies with characteristics relevant to the leukemia problem and demonstrating the usability of computational methods for complex problems in medical engineering
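    A minimal sketch of the Leave One Antibody Out validation idea described above, assuming a feature matrix X built from paired antibody/antigen encodings, qualitative interaction labels y, and a grouping array antibody_id; the two scikit-learn learners stand in for the assembled predictor and are illustrative assumptions, not the thesis models.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import LeaveOneGroupOut

        def leave_one_antibody_out(X, y, antibody_id):
            """Train on all antibodies but one and test on the held-out antibody."""
            scores = []
            for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=antibody_id):
                # Small ensemble standing in for the assembled predictor.
                model = VotingClassifier(
                    estimators=[("rf", RandomForestClassifier(n_estimators=200)),
                                ("lr", LogisticRegression(max_iter=1000))],
                    voting="soft",
                )
                model.fit(X[train_idx], y[train_idx])
                scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))
            return float(np.mean(scores))

    Each fold measures how well the ensemble generalises to an antibody it has never seen, which is a stricter test than row-wise cross-validation.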

    Jet energy measurement with the ATLAS detector in proton-proton collisions at √s = 7 TeV

    The jet energy scale and its systematic uncertainty are determined for jets measured with the ATLAS detector at the LHC in proton-proton collision data at a centre-of-mass energy of √s = 7 TeV corresponding to an integrated luminosity of 38 pb-1. Jets are reconstructed with the anti-kt algorithm with distance parameters R = 0.4 or R = 0.6. Jet energy and angle corrections are determined from Monte Carlo simulations to calibrate jets with transverse momenta pT ≥ 20 GeV and pseudorapidities |η| < 4.5. The jet energy systematic uncertainty is estimated using the single isolated hadron response measured in situ and in test-beams, exploiting the transverse momentum balance between central and forward jets in events with dijet topologies and studying systematic variations in Monte Carlo simulations. The jet energy uncertainty is less than 2.5% in the central calorimeter region (|η| < 0.8) for jets with 60 ≤ pT < 800 GeV, and is maximally 14% for pT < 30 GeV in the most forward region 3.2 ≤ |η| < 4.5. The jet energy is validated for jet transverse momenta up to 1 TeV to the level of a few percent using several in situ techniques by comparing a well-known reference such as the recoiling photon pT, the sum of the transverse momenta of tracks associated to the jet, or a system of low-pT jets recoiling against a high-pT jet. More sophisticated jet calibration schemes are presented based on calorimeter cell energy density weighting or hadronic properties of jets, aiming for an improved jet energy resolution and a reduced flavour dependence of the jet response. The systematic uncertainty of the jet energy determined from a combination of in situ techniques is consistent with the one derived from single hadron response measurements over a wide kinematic range. The nominal corrections and uncertainties are derived for isolated jets in an inclusive sample of high-pT jets. Special cases such as event topologies with close-by jets, or selections of samples with an enhanced content of jets originating from light quarks, heavy quarks or gluons are also discussed and the corresponding uncertainties are determined. © 2013 CERN for the benefit of the ATLAS collaboration
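    A minimal sketch, not the ATLAS calibration code, of the photon-jet direct pT-balance idea mentioned above: the jet response is estimated as the mean ratio of the jet pT to the well-measured recoiling photon pT, binned in photon pT. All inputs below are toy numbers, not detector data.

        import numpy as np

        def pt_balance_response(jet_pt, photon_pt, bin_edges):
            """Mean jet response <pT(jet)/pT(photon)> per reference-pT bin."""
            ratio = jet_pt / photon_pt
            index = np.digitize(photon_pt, bin_edges) - 1
            return np.array([ratio[index == i].mean() for i in range(len(bin_edges) - 1)])

        # Toy back-to-back photon+jet events with a built-in 5% jet under-response.
        rng = np.random.default_rng(0)
        photon_pt = rng.uniform(30.0, 300.0, size=10_000)                # GeV
        jet_pt = 0.95 * photon_pt * rng.normal(1.0, 0.1, size=10_000)
        print(pt_balance_response(jet_pt, photon_pt, np.array([30.0, 60.0, 110.0, 200.0, 300.0])))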

    Measurement of the inclusive and dijet cross-sections of b-jets in pp collisions at √s = 7 TeV with the ATLAS detector

    The inclusive and dijet production cross-sections have been measured for jets containing b-hadrons (b-jets) in proton-proton collisions at a centre-of-mass energy of √s = 7 TeV, using the ATLAS detector at the LHC. The measurements use data corresponding to an integrated luminosity of 34 pb^-1. The b-jets are identified using either a lifetime-based method, where secondary decay vertices of b-hadrons in jets are reconstructed using information from the tracking detectors, or a muon-based method, where the presence of a muon is used to identify semileptonic decays of b-hadrons inside jets. The inclusive b-jet cross-section is measured as a function of transverse momentum in the range 20 < pT < 400 GeV and rapidity in the range |y| < 2.1. The bbbar-dijet cross-section is measured as a function of the dijet invariant mass in the range 110 < m_jj < 760 GeV, the azimuthal angle difference between the two jets, and the angular variable chi in two dijet mass regions. The results are compared with next-to-leading-order QCD predictions. Good agreement is observed between the measured cross-sections and the predictions obtained using POWHEG + Pythia. MC@NLO + Herwig shows good agreement with the measured bbbar-dijet cross-section. However, it does not reproduce the measured inclusive cross-section well, particularly for central b-jets with large transverse momenta.
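    As a rough illustration of how a binned differential cross-section is obtained from tagged-jet counts, the sketch below applies sigma_i = N_i * purity_i / (efficiency_i * luminosity * bin_width_i). It is a simplified single-step calculation under assumed numbers; none of the values are taken from the paper.

        import numpy as np

        def differential_xsec(n_tagged, purity, efficiency, luminosity_pb, bin_edges):
            """d(sigma)/d(pT) in pb/GeV from per-bin tagged-jet counts."""
            widths = np.diff(bin_edges)                   # bin widths in GeV
            n_signal = n_tagged * purity                  # remove non-b background
            return n_signal / (efficiency * luminosity_pb * widths)

        edges = np.array([20.0, 40.0, 80.0, 160.0, 400.0])   # illustrative pT bins, GeV
        print(differential_xsec(
            n_tagged=np.array([120000.0, 45000.0, 8000.0, 600.0]),
            purity=np.array([0.55, 0.60, 0.65, 0.70]),
            efficiency=np.array([0.45, 0.50, 0.50, 0.45]),
            luminosity_pb=34.0,                           # 34 pb^-1, as in the abstract
            bin_edges=edges,
        ))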

    Measurement of the cross-section of high transverse momentum vector bosons reconstructed as single jets and studies of jet substructure in pp collisions at √s = 7 TeV with the ATLAS detector

    This paper presents a measurement of the cross-section for high transverse momentum W and Z bosons produced in pp collisions and decaying to all-hadronic final states. The data used in the analysis were recorded by the ATLAS detector at the CERN Large Hadron Collider at a centre-of-mass energy of √s = 7 TeV and correspond to an integrated luminosity of 4.6 fb^-1. The measurement is performed by reconstructing the boosted W or Z bosons in single jets. The reconstructed jet mass is used to identify the W and Z bosons, and a jet substructure method based on energy cluster information in the jet centre-of-mass frame is used to suppress the large multi-jet background. The cross-section for events with a hadronically decaying W or Z boson, with transverse momentum pT > 320 GeV and pseudorapidity |η| < 1.9, is measured to be σ_{W+Z} = 8.5 ± 1.7 pb and is compared to next-to-leading-order calculations. The selected events are further used to study jet grooming techniques
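    A minimal sketch of the invariant-mass calculation that underlies the jet-mass identification described above: the jet mass follows from the four-momentum sum of the jet constituents, m^2 = (sum E)^2 - |sum p|^2. The constituent kinematics below are illustrative, not detector data.

        import numpy as np

        def jet_mass(pt, eta, phi, energy):
            """Invariant mass of a jet from constituent (pt, eta, phi, E) arrays."""
            px, py, pz = pt * np.cos(phi), pt * np.sin(phi), pt * np.sinh(eta)
            m2 = np.sum(energy) ** 2 - (np.sum(px) ** 2 + np.sum(py) ** 2 + np.sum(pz) ** 2)
            return float(np.sqrt(max(m2, 0.0)))

        # Two nearby massless constituents separated in phi give a small, non-zero jet mass.
        print(jet_mass(np.array([50.0, 50.0]), np.array([0.0, 0.0]),
                       np.array([0.0, 0.3]), np.array([50.0, 50.0])))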

    Search for pair-produced long-lived neutral particles decaying to jets in the ATLAS hadronic calorimeter in pp collisions at √s = 8 TeV

    The ATLAS detector at the Large Hadron Collider at CERN is used to search for the decay of a scalar boson to a pair of long-lived particles, neutral under the Standard Model gauge group, in 20.3 fb−1 of data collected in proton–proton collisions at √s = 8 TeV. This search is sensitive to long-lived particles that decay to Standard Model particles producing jets at the outer edge of the ATLAS electromagnetic calorimeter or inside the hadronic calorimeter. No significant excess of events is observed. Limits are reported on the product of the scalar boson production cross section times branching ratio into long-lived neutral particles as a function of the proper lifetime of the particles. Limits are reported for boson masses from 100 GeV to 900 GeV, and a long-lived neutral particle mass from 10 GeV to 150 GeV
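    As a worked illustration of why the sensitivity depends on the proper lifetime, the sketch below evaluates the probability that a particle with proper decay length c*tau and Lorentz boost beta*gamma decays inside a radial shell such as a calorimeter volume, assuming a simple exponential decay law. The geometry and numbers are illustrative assumptions, not the analysis configuration.

        import numpy as np

        def decay_probability(ctau_m, beta_gamma, r_in_m, r_out_m):
            """P(decay in [r_in, r_out]) for a lab-frame decay length beta*gamma*c*tau."""
            lab_decay_length = beta_gamma * ctau_m
            return np.exp(-r_in_m / lab_decay_length) - np.exp(-r_out_m / lab_decay_length)

        # Example: c*tau = 1 m, boost factor 3, shell between 2 m and 4 m from the beamline.
        print(decay_probability(ctau_m=1.0, beta_gamma=3.0, r_in_m=2.0, r_out_m=4.0))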

    Toxocariasis: a silent threat with a progressive public health impact

    Background: Toxocariasis is a neglected parasitic zoonosis that afflicts millions of children and adolescents worldwide, especially in impoverished communities. The disease is caused by infection with larvae of Toxocara canis and T. cati, the most ubiquitous intestinal nematode parasites of dogs and cats, respectively. In this article, recent advances in the epidemiology, clinical presentation, diagnosis and pharmacotherapies used in the treatment of toxocariasis are reviewed. Main text: Over the past two decades, we have come far in our understanding of the biology and epidemiology of toxocariasis. However, the lack of laboratory infrastructure in some countries, the lack of uniform case definitions and limited surveillance infrastructure are among the challenges that have hindered the estimation of the global disease burden. Toxocariasis encompasses four clinical forms: visceral, ocular, covert and neural. An incorrect or missed diagnosis of any of these disabling conditions can result in severe health consequences and considerable medical care spending. Fortunately, multiple diagnostic modalities are available which, if used effectively together with the administration of appropriate pharmacologic therapies, can minimize unnecessary patient morbidity. Conclusions: Although progress has been made in the management of toxocariasis patients, much work remains to be done. Implementation of new technologies and a better understanding of the pathogenesis of toxocariasis can identify new diagnostic biomarkers, which may help increase diagnostic accuracy. Further clinical research breakthroughs are also needed to develop better ways to effectively control and prevent this serious disease