87 research outputs found

    Programming and parallelising applications for distributed infrastructures

    The last decade has witnessed unprecedented changes in parallel and distributed infrastructures. Due to the diminished gains in processor performance from increasing clock frequency, manufacturers have moved from uniprocessor architectures to multicores; as a result, clusters of computers have incorporated such new CPU designs. Furthermore, the ever-growing need of scientific applications for computing and storage capabilities has motivated the appearance of grids: geographically-distributed, multi-domain infrastructures based on sharing of resources to accomplish large and complex tasks. More recently, clouds have emerged by combining virtualisation technologies, service-orientation and business models to deliver IT resources on demand over the Internet. The size and complexity of these new infrastructures pose a challenge for programmers to exploit them. On the one hand, some of the difficulties are inherent to concurrent and distributed programming itself, e.g. dealing with thread creation and synchronisation, messaging, data partitioning and transfer, etc. On the other hand, other issues are related to the singularities of each scenario, like the heterogeneity of Grid middleware and resources or the risk of vendor lock-in when writing an application for a particular Cloud provider. In the face of such a challenge, programming productivity - understood as a tradeoff between programmability and performance - has become crucial for software developers. There is a strong need for high-productivity programming models and languages, which should provide simple means for writing parallel and distributed applications that can run on current infrastructures without sacrificing performance. In that sense, this thesis contributes with Java StarSs, a programming model and runtime system for developing and parallelising Java applications on distributed infrastructures.
The model has two key features: first, the user programs in a fully-sequential standard-Java fashion - no parallel construct, API call or pragma must be included in the application code; second, it is completely infrastructure-unaware, i.e. programs do not contain any details about deployment or resource management, so that the same application can run in different infrastructures with no changes. The only requirement for the user is to select the application tasks, which are the model's unit of parallelism. Tasks can be either regular Java methods or web service operations, and they can handle any data type supported by the Java language, namely files, objects, arrays and primitives. For the sake of simplicity of the model, Java StarSs shifts the burden of parallelisation from the programmer to the runtime system. The runtime is responsible for modifying the original application to make it create asynchronous tasks and synchronise data accesses from the main program. Moreover, the implicit inter-task concurrency is automatically found as the application executes, thanks to a data dependency detection mechanism that integrates all the Java data types. This thesis provides a fairly comprehensive evaluation of Java StarSs on three different distributed scenarios: Grid, Cluster and Cloud. For each of them, a runtime system was designed and implemented to exploit their particular characteristics as well as to address their issues, while keeping the infrastructure unawareness of the programming model. The evaluation compares Java StarSs against state-of-the-art solutions, both in terms of programmability and performance, and demonstrates how the model can bring remarkable productivity to programmers of parallel distributed applications.
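The data dependency detection the abstract describes, where the runtime links each task to the last writer of the data it accesses, can be pictured with a minimal sketch. This is illustrative Python, not the Java StarSs runtime; every class and method name here is invented for the example.

```python
# Minimal sketch of last-writer dependency detection in a task-based runtime.
# Each task declares which named data it reads and writes; the runtime adds an
# edge from the task to the most recent writer of every datum it touches.
from collections import defaultdict

class TaskGraph:
    def __init__(self):
        self.tasks = []                  # (task_id, fn, reads, writes)
        self.deps = defaultdict(set)     # task_id -> prerequisite task ids
        self.last_writer = {}            # datum name -> id of its last writer

    def add_task(self, fn, reads=(), writes=()):
        tid = len(self.tasks)
        self.tasks.append((tid, fn, tuple(reads), tuple(writes)))
        # Read-after-write dependency on the last writer of each input.
        for d in reads:
            if d in self.last_writer:
                self.deps[tid].add(self.last_writer[d])
        # Write-after-write ordering, then record this task as the new writer.
        for d in writes:
            if d in self.last_writer:
                self.deps[tid].add(self.last_writer[d])
            self.last_writer[d] = tid
        return tid

    def run(self, data):
        # Execute tasks whose prerequisites are done; tasks that become ready
        # in the same pass are mutually independent and could run in parallel.
        done, order = set(), []
        while len(done) < len(self.tasks):
            for tid, fn, _reads, _writes in self.tasks:
                if tid not in done and self.deps[tid] <= done:
                    fn(data)
                    done.add(tid)
                    order.append(tid)
        return order
```

For example, two tasks that both read `a` carry no mutual dependency and could run concurrently, while a task reading their outputs is ordered after both - the scheduling falls out of the recorded accesses alone, with no parallel constructs in the task code.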

    A caching mechanism to exploit object store speed in High Energy Physics analysis

    Data analysis workflows in High Energy Physics (HEP) read data written in the ROOT columnar format. Such data has traditionally been stored in files that are often read via the network from remote storage facilities, which represents a performance penalty especially for data processing workflows that are I/O bound. To address that issue, this paper presents a new caching mechanism, implemented in the I/O subsystem of ROOT, which is independent of the storage backend used to write the dataset. Notably, it can be used to leverage the speed of high-bandwidth, low-latency object stores. The performance of this caching approach is evaluated by running a real physics analysis on an Intel DAOS cluster, both on a single node and distributed on multiple nodes. This work benefited from the support of the CERN Strategic R&D Programme on Technologies for Future Experiments [1] and from grant PID2020-113656RB-C22 funded by Ministerio de Ciencia e Innovación MCIN/AEI/10.13039/501100011033. The hardware used to perform the experimental evaluation involving DAOS (HPE Delphi cluster described in Sect. 5.2) was made available thanks to a collaboration agreement with Hewlett-Packard Enterprise (HPE) and Intel. User access to the Virgo cluster at the GSI institute was given for the purpose of running the benchmarks using the Lustre filesystem. Padulano, VE.; Tejedor Saavedra, E.; Alonso-Jordá, P.; López Gómez, J.; Blomer, J. (2022). A caching mechanism to exploit object store speed in High Energy Physics analysis. Cluster Computing. 1-16. https://doi.org/10.1007/s10586-022-03757-2
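The backend-independence the abstract emphasises can be sketched as a read-through cache placed in front of an arbitrary read callback: the cache never needs to know whether bytes come from a local file, a remote filesystem or an object store. This is a hypothetical Python sketch of the general technique, not the actual ROOT implementation; the range-keyed granularity and all names are assumptions for illustration.

```python
# Minimal read-through cache over any storage backend. The backend is just a
# callable (offset, length) -> bytes, so the cache is agnostic to whether the
# data lives in a file, over the network, or in an object store.
class CachingReader:
    def __init__(self, backend_read):
        self.backend_read = backend_read
        self.cache = {}      # (offset, length) -> bytes already fetched
        self.misses = 0      # number of reads that had to hit the backend

    def read(self, offset, length):
        key = (offset, length)
        if key not in self.cache:
            self.misses += 1
            self.cache[key] = self.backend_read(offset, length)
        return self.cache[key]
```

Repeated reads of the same range - common when an analysis iterates over the same columns - then cost a dictionary lookup instead of a network round trip, which is where the benefit over remote I/O-bound workflows comes from.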

    Prototyping a ROOT-based distributed analysis workflow for HL-LHC: the CMS use case

    The challenges expected for the next era of the Large Hadron Collider (LHC), both in terms of storage and computing resources, provide LHC experiments with a strong motivation for evaluating ways of rethinking their computing models at many levels. Great efforts have been put into optimizing the computing resource utilization for data analysis, which leads both to lower hardware requirements and faster turnaround for physics analyses. In this scenario, the Compact Muon Solenoid (CMS) collaboration is involved in several activities aimed at benchmarking different solutions for running High Energy Physics (HEP) analysis workflows. A promising solution is evolving software towards more user-friendly approaches featuring a declarative programming model and interactive workflows. The computing infrastructure should keep up with this trend by offering on the one side modern interfaces, and on the other side hiding the complexity of the underlying environment, while efficiently leveraging the already deployed grid infrastructure and scaling toward opportunistic resources like public cloud or HPC centers. This article presents the first example of using the ROOT RDataFrame technology to exploit such next-generation approaches for a production-grade CMS physics analysis. A new analysis facility is created to offer users a modern interactive web interface based on JupyterLab that can leverage HTCondor-based grid resources on different geographical sites. The physics analysis is converted from a legacy iterative approach to the modern declarative approach offered by RDataFrame and distributed over multiple computing nodes. The new scenario offers not only an overall improved programming experience, but also an order-of-magnitude speedup with respect to the previous approach.
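The conversion from an iterative event loop to a declarative chain can be illustrated with a tiny self-contained sketch. This is plain Python mimicking the general Filter/Define style, not ROOT's RDataFrame API: transformations are recorded lazily and only executed when a result is requested, which is the property that lets a scheduler distribute the event loop transparently.

```python
# Minimal lazy, declarative dataframe sketch. Filter/Define only record
# operations; the single pass over the rows happens when an aggregation
# (Count, Sum) is finally asked for.
class Frame:
    def __init__(self, rows, ops=None):
        self.rows = rows
        self.ops = ops or []             # recorded, not yet executed

    def Filter(self, pred):
        return Frame(self.rows, self.ops + [("filter", pred)])

    def Define(self, name, fn):
        return Frame(self.rows, self.ops + [("define", name, fn)])

    def _loop(self):
        # The "event loop": apply recorded ops in order to each row.
        for row in self.rows:
            row = dict(row)              # leave the input rows untouched
            keep = True
            for op in self.ops:
                if op[0] == "filter":
                    if not op[1](row):
                        keep = False
                        break
                else:                    # ("define", name, fn)
                    row[op[1]] = op[2](row)
            if keep:
                yield row

    def Count(self):
        return sum(1 for _ in self._loop())

    def Sum(self, col):
        return sum(r[col] for r in self._loop())
```

Because the chain is just data until an aggregation runs, the same description could be split over many workers and the partial results merged - the core idea behind distributing such an analysis over multiple nodes.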

    XtreemOS application execution management: a scalable approach

    Designing a job management system for the Grid is a non-trivial task. While a complex middleware can give a lot of features, it often implies sacrificing performance. Such performance loss is especially noticeable for small jobs. A Job Manager's design also affects the capabilities of the monitoring system. We believe that monitoring a job or asking for a job status should be fast and easy, like doing a simple 'ps'. In this paper, we present the job management of XtreemOS - a Linux-based operating system to support Virtual Organizations for the Grid. This management is performed inside the Application Execution Manager (AEM). We evaluate its performance using only one job manager plus the built-in monitoring infrastructure. Furthermore, we present a set of real-world applications using AEM and its features. In XtreemOS we avoid reinventing the wheel and use the Linux paradigm as an abstraction.
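The 'ps'-like monitoring goal above can be pictured with a minimal sketch: if job state lives in a simple local table, a status query is a cheap lookup rather than a round trip through heavyweight middleware. This is illustrative Python, not XtreemOS/AEM code; all names here are invented for the example.

```python
# Minimal job table with a ps-style listing: submit records a job, finish
# updates its state, and ps() is a constant-cost local query per job.
import time

class JobManager:
    def __init__(self):
        self.jobs = {}       # job_id -> {"command", "state", "submitted"}
        self.next_id = 0

    def submit(self, command):
        jid = self.next_id
        self.next_id += 1
        self.jobs[jid] = {"command": command, "state": "RUNNING",
                          "submitted": time.time()}
        return jid

    def finish(self, jid):
        self.jobs[jid]["state"] = "DONE"

    def ps(self):
        # One tuple per job, in submission order, like a local 'ps' listing.
        return [(jid, j["state"], j["command"])
                for jid, j in sorted(self.jobs.items())]
```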

    Useful pharmacological parameters for G-protein-coupled receptor homodimers obtained from competition experiments. Agonist-antagonist binding modulation

    Many G-protein-coupled receptors (GPCRs) are expressed on the plasma membrane as dimers. Since drug binding data are currently fitted using equations developed for monomeric receptors, the interpretation of the pharmacological data is equivocal in many cases. As reported here, GPCR dimer models account for changes in competition curve shape as a function of the radioligand concentration used, something that cannot be explained by monomeric receptor models. Macroscopic equilibrium dissociation constants for the agonist and a homotropic cooperativity index reflecting the intramolecular communication within the dopamine D1 or adenosine A2A receptor homodimer, as well as a hybrid equilibrium dissociation constant reflecting the antagonist/agonist modulation, can be calculated by fitting binding data from antagonist/agonist competition experiments to equations developed from dimer receptor models. Comparing fits that assume a classical monomeric receptor model with fits that assume a dimer model shows that dimer receptor models provide more clues useful in drug discovery than monomer-based models.

    Functional μ-opioid-galanin receptor heteromers in the ventral tegmental area

    The neuropeptide galanin has been shown to interact with the opioid system. More specifically, galanin counteracts the behavioral effects of the systemic administration of μ-opioid receptor (MOR) agonists. Yet the mechanism responsible for this galanin-opioid interaction has remained elusive. Using biophysical techniques in mammalian transfected cells, we found evidence for selective heteromerization of MOR and the galanin receptor subtype Gal1 (Gal1R). Also in transfected cells, a synthetic peptide selectively disrupted MOR-Gal1R heteromerization as well as specific interactions between MOR and Gal1R ligands: a negative cross talk, by which galanin counteracted MAPK activation induced by the endogenous MOR agonist endomorphin-1, and a cross-antagonism, by which a MOR antagonist counteracted MAPK activation induced by galanin. These specific interactions, which represented biochemical properties of the MOR-Gal1R heteromer, could then be identified in situ in slices of rat ventral tegmental area (VTA) with MAPK activation and two additional cell signaling pathways, AKT and CREB phosphorylation. Furthermore, in vivo microdialysis experiments showed that the disruptive peptide selectively counteracted the ability of galanin to block the dendritic dopamine release in the rat VTA induced by local infusion of endomorphin-1, demonstrating a key role of MOR-Gal1R heteromers localized in the VTA in the direct control of dopamine cell function and their ability to mediate antagonistic interactions between MOR and Gal1R ligands. The results also indicate that MOR-Gal1R heteromers should be viewed as targets for the treatment of opioid use disorders.

    Software Challenges For HL-LHC Data Analysis

    The high energy physics community is discussing where investment is needed to prepare software for the HL-LHC and its unprecedented challenges. The ROOT project has been one of the central software players in high energy physics for decades. From its experience and expectations, the ROOT team has distilled a comprehensive set of areas that should see research and development in the context of data analysis software, to make best use of the HL-LHC's physics potential. This work shows what these areas could be, why the ROOT team believes investing in them is needed, which gains are expected, and where related work is ongoing. It can serve as an indication for future research proposals and collaborations.

    Ecto-nucleotidases activities in the contents of ovarian endometriomas: potential biomarkers of endometriosis

    Endometriosis, defined as the growth of endometrial tissue outside the uterus, is a common gynecologic condition affecting millions of women worldwide. It is a complex, estrogen-dependent inflammatory disorder with broad symptomatic variability, pelvic pain and infertility being its main characteristics. Ovarian endometriomas frequently develop in women with endometriosis. Late diagnosis is one of the main problems of endometriosis; thus, it is important to identify biomarkers for early diagnosis. The aim of the present work is to evaluate the ecto-nucleotidase activities in the contents of endometriomas. These enzymes, through the regulation of extracellular ATP and adenosine levels, are key enzymes in inflammatory processes, and their expression has been previously characterized in human endometrium. To achieve our objective, the echo-guided aspirated fluids of endometriomas were analyzed by evaluating the ecto-nucleotidase activities and compared with those of simple cysts. Our results show that enzyme activities are quantifiable in the ovarian cyst aspirates and that endometriomas show significantly higher ecto-nucleotidase activities than simple cysts (5.5-fold increase for ATPase and 20-fold for ADPase), making them possible candidates for new endometriosis biomarkers. Moreover, we demonstrate the presence of ecto-nucleotidase-bearing exosomes in these fluids. These results add to the knowledge of the physiopathologic mechanisms underlying endometriosis and open up a promising new field of study.

    In Vivo Evaluation of 3-Dimensional Polycaprolactone Scaffolds for Cartilage Repair in Rabbits

    Background: Cartilage tissue engineering using synthetic scaffolds allows maintaining mechanical integrity and withstanding stress loads in the body, as well as providing a temporary substrate to which transplanted cells can adhere. Purpose: This study evaluates the use of polycaprolactone (PCL) scaffolds for the regeneration of articular cartilage in a rabbit model. Study Design: Controlled laboratory study. Methods: Five conditions were tested to attempt cartilage repair. To compare spontaneous healing (from subchondral plate bleeding) and healing due to tissue engineering, the experiment considered the use of osteochondral defects (to allow blood flow into the defect site) alone or filled with a bare PCL scaffold, and the use of PCL-chondrocyte constructs in chondral defects. For the latter condition, one series of PCL scaffolds was seeded in vitro with rabbit chondrocytes for 7 days and the cell/scaffold constructs were transplanted into rabbits' articular defects, avoiding compromising the subchondral bone. Cell pellets and bare scaffolds were implanted as controls in a chondral defect. Results: After 3 months with PCL scaffolds or cell/PCL constructs, defects were filled with white cartilaginous tissue; integration into the surrounding native cartilage was much better than in the control (cell pellet). The engineered constructs showed histologically good integration with the subchondral bone and surrounding cartilage, with accumulation of extracellular matrix including type II collagen and glycosaminoglycan. The elastic modulus measured in the zone of the defect with the PCL/cell constructs was very similar to that of native cartilage, while that of the pellet-repaired cartilage was much smaller than native cartilage. Conclusion: The results are quite promising with respect to the use of PCL scaffolds as aids for the regeneration of articular cartilage using tissue engineering techniques. The support of the Spanish Ministry of Science through projects No. MAT2007-66759-C03-01 and MAT2007-66759-C03-02 (including FEDER financial support) is acknowledged. Dr Gomez Tejedor acknowledges the support given by the government of Valencia, the Generalitat Valenciana, through the GVPRE/2008/160 project. The support of Grants 2005SGR 00762 and 2005SGR 00848 (Catalan Department of Universities, Research and the Information Society) is also acknowledged. The Aging and Fragile Elderly cooperative research network (Red Tematica de Investigacion Cooperativa en Envejecimiento y Fragilidad [RETICEF]) and the Bioengineering, Biomaterials and Nanomedicine research network (Centro de Investigacion Biomedica en Red en Bioingenieria, Biomateriales y Nanomedicina [CIBER BBN]) are initiatives of the Instituto de Salud Carlos III (ISCIII). The group of the Centro de Investigacion Principe Felipe (CIPF) acknowledges funding in the framework of the collaboration agreement among the ISCIII, the Conselleria de Sanidad de la Comunidad Valenciana, and the CIPF for the "Investigacion Basica y Traslacional en Medicina Regenerativa." Martinez-Diaz, S.; Garcia-Giralt, N.; Lebourg, MM.; Gómez-Tejedor, JA.; Vila, G.; Caceres, E.; Benito, P.... (2010). In Vivo Evaluation of 3-Dimensional Polycaprolactone Scaffolds for Cartilage Repair in Rabbits. American Journal of Sports Medicine. 38(3):509-519. https://doi.org/10.1177/0363546509352448