
    Macrophages retain hematopoietic stem cells in the spleen via VCAM-1

    Splenic myelopoiesis provides a steady flow of leukocytes to inflamed tissues, and leukocytosis correlates with cardiovascular mortality. Yet regulation of hematopoietic stem cell (HSC) activity in the spleen is incompletely understood. Here, we show that red pulp vascular cell adhesion molecule 1 (VCAM-1)+ macrophages are essential to extramedullary myelopoiesis because these macrophages use the adhesion molecule VCAM-1 to retain HSCs in the spleen. Nanoparticle-enabled in vivo RNAi silencing of the receptor for macrophage colony-stimulating factor (M-CSFR) blocked splenic macrophage maturation, reduced splenic VCAM-1 expression and compromised splenic HSC retention. Both depleting macrophages in CD169 iDTR mice and silencing VCAM-1 in macrophages released HSCs from the spleen. When we silenced either VCAM-1 or M-CSFR in mice with myocardial infarction or in ApoE−/− mice with atherosclerosis, nanoparticle-enabled in vivo RNAi mitigated blood leukocytosis, limited inflammation in the ischemic heart, and reduced myeloid cell numbers in atherosclerotic plaques.

    SEARCH FOR ANOMALIES IN THE COMPUTING JOBS EXECUTION OF THE ATLAS EXPERIMENT WITH THE USE OF VISUAL ANALYTICS

    ATLAS is the largest experiment at the LHC. It generates vast volumes of scientific data accompanied by auxiliary metadata. These metadata represent all stages of data processing and Monte-Carlo simulation, as well as characteristics of the computing environment, such as software versions and infrastructure parameters, detector geometry and calibration values. The systems responsible for data and workflow management and metadata archiving in ATLAS are Rucio, ProdSys2, PanDA and AMI. Terabytes of metadata have accumulated over the many years these systems have been in operation. These metadata can help physicists carrying out studies to evaluate in advance the duration of their analysis jobs. As all these jobs are executed in a heterogeneous, distributed and dynamically changing infrastructure, their duration may vary across computing centers and depends on many factors, such as memory per core, system software version and flavour, and the volume of input datasets. Ensuring uniformity in job execution requires searching for anomalies (for example, jobs with excessively long execution times) and analyzing the reasons for such behavior to predict and avoid its recurrence in the future. The analysis should be based on the full archive of historical job metadata, which is too large to be processed and analyzed by standard means. Detailed analysis of the archive can benefit from visual analytics methods, which provide an easier way to navigate the multiple internal data correlations. The presented research is the starting point in this direction. A slice of the ATLAS jobs archive was analyzed visually, highlighting the most and the least efficient computing sites. The efficient sites will then be compared with the inefficient ones to identify parameters affecting job execution time or indicating possible time delays. Further work will concentrate on increasing the number of analyzed jobs and on developing interactive 3-dimensional visual models that facilitate the interpretation of analysis results.
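
    As a rough illustration of the anomaly search described above, the sketch below flags jobs whose execution time is far outside the typical range for their computing site using a simple interquartile-range rule. The pandas-based workflow and the column names ('site', 'wall_time_s') are assumptions for illustration only, not the actual ATLAS metadata schema or analysis pipeline.

```python
import pandas as pd

def flag_long_jobs(jobs: pd.DataFrame, k: float = 3.0) -> pd.DataFrame:
    """Mark jobs whose wall time exceeds Q3 + k*IQR for their own site.

    'site' and 'wall_time_s' are hypothetical column names standing in
    for the real ATLAS job parameters.
    """
    grouped = jobs.groupby("site")["wall_time_s"]
    q1 = grouped.transform(lambda s: s.quantile(0.25))
    q3 = grouped.transform(lambda s: s.quantile(0.75))
    out = jobs.copy()
    out["anomalous"] = out["wall_time_s"] > q3 + k * (q3 - q1)
    return out

# Toy data: site B has one job that ran an order of magnitude longer.
jobs = pd.DataFrame({
    "site": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "wall_time_s": [3600, 3700, 3500, 3900, 4000, 4050, 4100, 40000],
})
print(flag_long_jobs(jobs))
```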

    Search for Anomalies in the Computational Jobs of the ATLAS Experiment with the Application of Visual Analytics

    ATLAS is the largest experiment at the LHC. It generates vast volumes of scientific data accompanied by auxiliary metadata representing all stages of data processing, Monte-Carlo simulation, and characteristics of the computing environment. Terabytes of metadata have been accumulated by the workflow and data management and metadata archiving systems. These metadata can help physicists carrying out studies to evaluate in advance the duration of their analysis jobs. As these jobs are executed in a heterogeneous, distributed and dynamically changing infrastructure, their duration varies across computing centers and depends on many factors. Ensuring uniformity in job execution requires searching for anomalies and analyzing the reasons for non-trivial job execution behavior to predict and avoid its recurrence in the future. Detailed analysis of a large volume of job executions benefits from the application of machine learning and visual analysis methods. The visual analytics approach was demonstrated on the analysis of the jobs archive. The proposed method made it possible to identify computing sites with non-trivial job execution behavior, and the visual cluster analysis revealed parameters affecting or indicating possible time delays. Further work will concentrate on increasing the number of analyzed jobs and on developing interactive visual models that facilitate the interpretation of analysis results.
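
    The abstract does not specify which machine learning method was used. As a minimal sketch, assuming per-site summary features and scikit-learn, the following groups computing sites with k-means so that sites with non-trivial execution behavior separate from typical ones; all site names and feature values are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-site summary features: mean wall time (s), failure
# rate, mean input size (GB). Real ATLAS metadata has many more.
sites = ["SITE_1", "SITE_2", "SITE_3", "SITE_4", "SITE_5", "SITE_6"]
features = np.array([
    [3600.0, 0.02, 2.1],
    [3700.0, 0.03, 2.0],
    [3550.0, 0.02, 2.2],
    [9100.0, 0.25, 2.1],   # slow site with many failures
    [3650.0, 0.01, 2.3],
    [8800.0, 0.30, 2.0],   # another slow, failure-prone site
])

scaled = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)

for site, label in zip(sites, labels):
    print(site, "-> cluster", label)
```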

    Visual Cluster Analysis for Computing Tasks at Workflow Management System of the ATLAS Experiment

    Hundreds of petabytes of experimental data in high energy and nuclear physics (HENP) have already been obtained by unique scientific facilities such as LHC, RHIC and KEK. As the accelerators are modernized (their energy and luminosity increased), data volumes grow rapidly and have reached the exabyte scale, which also increases the number of analysis and data processing tasks continuously competing for computational resources. This growth in processing tasks is met by increasing the capacity of the computing environment through the involvement of high-performance computing resources, forming a heterogeneous distributed computing environment (hundreds of distributed computing centers). In addition, errors occur while executing data analysis and processing tasks, caused by software and hardware failures. With a distributed model of data processing and analysis, the optimization of data management and workload systems becomes a fundamental task, and the lack of a timely solution leads to economic, functional and time losses. This work describes the first stage of a study aimed at increasing the stability and efficiency of workflow management systems for mega-science experiments by using visual analytics methods. Using the case of the ATLAS experiment at the LHC, visual methods for cluster analysis of the workload management system's computing tasks/jobs will be applied. The interdependencies and correlations between various task/job parameters will be investigated and graphically interpreted in N-dimensional space using 3D projections. Visual analysis makes it possible to identify similar jobs as well as anomalous jobs, and to determine which parameters give rise to the anomaly. Further work in this direction will focus on increasing the number of analysed computing jobs and on developing the appropriate infrastructure.
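
    One common way to obtain 3D projections of an N-dimensional parameter space, as mentioned above, is principal component analysis. The sketch below, with invented job parameters, illustrates the idea only; it is not necessarily the projection method used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical job parameter matrix: one row per job, one column per
# numeric parameter (wall time, CPU time, input size, memory, cores).
normal_jobs = rng.normal(loc=[3600, 3000, 2.0, 2000, 8],
                         scale=[200, 150, 0.2, 100, 1], size=(200, 5))
delayed_jobs = rng.normal(loc=[9000, 8000, 2.0, 6000, 8],
                          scale=[300, 250, 0.2, 200, 1], size=(10, 5))
jobs = np.vstack([normal_jobs, delayed_jobs])

# Standardize, then project the 5-dimensional parameter space onto its
# first 3 principal components, suitable for an interactive 3D scatter plot.
scaled = StandardScaler().fit_transform(jobs)
projection = PCA(n_components=3).fit_transform(scaled)
print(projection.shape)          # (210, 3)
print(projection[:3].round(2))   # coordinates of the first three jobs
```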

    The formation of Fe–Ga–In nanocomposite particles using mechanochemical interaction of Fe with the Ga–In eutectic

    Mechanochemical interaction of Fe powder with the liquid Ga-In eutectic in the Fe-rich concentration corner of the Fe-Ga-In phase diagram was studied and compared with binary Fe-Ga and Fe-In specimens. Slow formation of dilute solid solutions in the Fe-In system was confirmed. In the Fe-Ga system, the dominant concentrated A2 solid solution is formed with H_eff = 236 kOe, accompanied by ordered D0₃ and L1₂ phases. The Fe-Ga-In system features a stationary equilibrium between the concentrated A2 phase and D0₃ in the iron matrix.

    Visual Cluster Analysis for Computing Tasks at Workflow Management System of the ATLAS Experiment at the LHC

    Hundreds of petabytes of experimental data in high energy and nuclear physics (HENP) have already been obtained by unique scientific facilities such as LHC, RHIC and KEK. As the accelerators are modernized (their energy and luminosity are increasing), data volumes grow rapidly and have reached the exabyte scale, which also leads to an increase in the number of analysis and data processing tasks continuously competing for computational resources. This growth in processing tasks is met by increasing the capacity of the computing infrastructure through the involvement of high-performance computing resources, forming a heterogeneous distributed computing environment (hundreds of distributed computing centers). With a distributed model of data processing and analysis, the optimization of data management and workload systems becomes a critical task, and the absence of an adequate solution leads to economic, functional and time losses. This work describes the first stage of a study aimed at increasing the stability and efficiency of workflow management systems for mega-science experiments by using visual analytics methods. Using the case of the ATLAS experiment at the LHC, visual methods for cluster analysis of the workload management system's computing tasks/jobs will be applied. The interdependencies and correlations between various task/job parameters will be investigated and graphically interpreted in N-dimensional space using 3D projections. Visual analysis makes it possible to identify similar jobs as well as anomalous jobs, and to determine what causes such anomalies. Further work in this direction will focus on increasing the number of analysed computing jobs and on developing the appropriate infrastructure.
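
    As a hedged illustration of separating anomalous jobs and tracing which parameter drives the anomaly, the sketch below uses DBSCAN's noise label on synthetic job records and then compares per-parameter mean shifts. The parameter names, thresholds and clustering settings are assumptions for the example, not the study's actual method.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical job parameters: wall time (s), memory per core (MB), input size (GB).
normal = rng.normal([3600, 2000, 2.0], [200, 100, 0.2], size=(300, 3))
anomalies = rng.normal([3600, 6000, 2.0], [200, 300, 0.2], size=(5, 3))
jobs = np.vstack([normal, anomalies])

scaled = StandardScaler().fit_transform(jobs)
labels = DBSCAN(eps=1.0, min_samples=10).fit_predict(scaled)
is_noise = labels == -1          # DBSCAN marks outliers with label -1

# Compare anomalous jobs to the rest, parameter by parameter, to see
# which one accounts for the deviation (here: memory per core).
names = ["wall_time_s", "memory_per_core_mb", "input_size_gb"]
shift = scaled[is_noise].mean(axis=0) - scaled[~is_noise].mean(axis=0)
for name, value in sorted(zip(names, shift), key=lambda p: -abs(p[1])):
    print(f"{name}: mean shift {value:+.2f} (in standard deviations)")
```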

    Microbiota of the Colonic Diverticula in the Complicated Form of Diverticulitis: A Case Report

    Intestinal microbiota appears to be implicated in the pathogenesis of diverticular disease. We present the case of a patient with diverticular colon disease complicated by a pelvic abscess. During the successful surgical treatment, two specimens were taken from the resected colon segment for microbiota analysis: an inflamed and perforated diverticulum and a diverticulum without signs of inflammation. Culturing and 16S rRNA gene sequencing revealed significant changes in the microbial community structure and composition associated with the acute inflammation and perforation of the colonic diverticulum. The characteristics usually associated with an inflammatory process in the gut, namely reduced microbial diversity and richness, a decreased Firmicutes-to-Bacteroidetes (F/B) ratio, depletion of butyrate-producing bacteria, and blooming of Enterobacteriaceae, were more pronounced in the non-inflamed diverticulum than in the adjacent inflamed and perforated one. This is the first study of the intraluminal microbiota of diverticular pockets, which is more relevant to the etiology of diverticular disease than the mucosa-associated microbiota sampled via biopsies or the luminal microbiota sampled via fecal samples.

    Methods of Data Popularity Evaluation in the ATLAS Experiment at the LHC

    The ATLAS Experiment at the LHC generates petabytes of data that are distributed among 160 computing sites all over the world and processed continuously by various central production and user analysis tasks. The popularity of data is typically measured as the number of accesses and plays an important role in resolving data management issues: deleting, replicating, and moving data between tapes, disks and caches. These data management procedures have so far been carried out in a semi-manual mode, and we have now focused our efforts on automating them, making use of the historical knowledge about existing data management strategies. In this study we describe sources of information about data popularity and demonstrate their consistency. Based on the calculated popularity measurements, various distributions were obtained. Auxiliary information about replication and task processing allowed us to evaluate the correspondence between the number of tasks with popular data executed per site and the number of replicas per site. We also examine the popularity of user analysis data, which is much less predictable than that of central production and requires more indicators than just the number of accesses.
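
    A minimal sketch of the popularity bookkeeping described above, assuming toy access and replica tables in pandas: popularity is counted as accesses per dataset, and accesses to popular data per site (standing in for tasks with popular data) are set against the number of replicas of popular datasets held at each site. The table layouts, names and the popularity threshold are invented and do not reflect the real ATLAS popularity sources.

```python
import pandas as pd

# Hypothetical access records: one row per dataset access at a site.
accesses = pd.DataFrame({
    "dataset": ["data_A", "data_A", "data_A", "data_B", "data_C", "data_A", "data_B"],
    "site":    ["SITE_1", "SITE_1", "SITE_2", "SITE_1", "SITE_3", "SITE_2", "SITE_2"],
})

# Popularity measured simply as the number of accesses per dataset.
popularity = accesses.groupby("dataset").size().rename("n_accesses")
print(popularity.sort_values(ascending=False))

# Hypothetical replica catalogue: which sites hold which datasets.
replicas = pd.DataFrame({
    "dataset": ["data_A", "data_A", "data_B", "data_C"],
    "site":    ["SITE_1", "SITE_2", "SITE_1", "SITE_3"],
})

# Compare, per site, how often popular data is accessed there versus how
# many replicas of popular datasets the site holds.
popular = popularity[popularity >= 2].index
accesses_per_site = accesses[accesses["dataset"].isin(popular)].groupby("site").size()
replicas_per_site = replicas[replicas["dataset"].isin(popular)].groupby("site").size()
print(pd.concat({"accesses": accesses_per_site, "replicas": replicas_per_site}, axis=1))
```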

    Enhancements in Functionality of the Interactive Visual Explorer for ATLAS Computing Metadata

    The development of the Interactive Visual Explorer (InVEx), a visual analytics tool for ATLAS computing metadata, includes research into various approaches for data handling on both the server and client sides. InVEx is implemented as a web-based application that aims to enhance the analytical and visualization capabilities of the existing monitoring tools and to facilitate the process of data analysis through interactivity and human supervision. The development of InVEx started with the implementation of a 3-dimensional interactive tool for cluster analysis (for the k-means and DBSCAN algorithms), and its further evolution is closely linked to the needs of ATLAS computing experts, who analyze metadata to ensure the stability and efficiency of the distributed computing environment. In the process of integrating InVEx with ATLAS computing metadata sources we faced two main challenges: 1) large data volumes need to be analyzed in real time (as an example, one ATLAS computing task may contain tens of thousands of jobs, each having over two hundred different parameters), and 2) machine learning clustering algorithms alone are not sufficient for visual cluster analysis - the ability to perform user-defined clusterization/grouping (by nominal or ordinal parameters) should be added to make the process of data analysis more manageable. The current work is focused on enhancements to the architecture of the InVEx application. First, we describe the user-manageable data preparation method for cluster analysis. Then, we present the Level-of-Detail method for interactive visual data analysis. Beginning with a low level of detail, when all data are grouped (by clustering algorithms or by parameters) and aggregated, we provide users with the means to look more deeply into these data, incrementally increasing the level of detail. Finally, the data storage format for InVEx is adapted to the Level-of-Detail method to keep all stages of the data derivation sequence.
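
    A minimal sketch of the Level-of-Detail idea with user-defined grouping, assuming a small pandas table of hypothetical job records: at a low level of detail the jobs are grouped by a nominal parameter and shown only as aggregates, and a selected group can then be expanded to its individual jobs. This illustrates the concept only and is not the InVEx implementation or data format.

```python
import pandas as pd

# Hypothetical job records for one computing task; real InVEx input
# carries hundreds of parameters per job.
jobs = pd.DataFrame({
    "jobstatus":   ["finished", "finished", "failed", "finished", "failed"],
    "site":        ["SITE_1",   "SITE_2",   "SITE_2", "SITE_1",   "SITE_1"],
    "wall_time_s": [3600, 3900, 120, 3700, 150],
})

# Low level of detail: user-defined grouping by a nominal parameter
# ('jobstatus'), with each group shown only as an aggregate.
overview = jobs.groupby("jobstatus").agg(
    n_jobs=("wall_time_s", "size"),
    mean_wall_time_s=("wall_time_s", "mean"),
)
print(overview)

# Higher level of detail: drill down into one selected group and show
# the individual jobs behind the aggregate.
print(jobs[jobs["jobstatus"] == "failed"])
```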