12 research outputs found

    Multi-aspect testing and ranking inference to quantify dimorphism in the cytoarchitecture of cerebellum of male, female and intersex individuals: a model applied to bovine brains.

    The dimorphism among male, female and freemartin intersex bovines, focusing on the vermal lobules VIII and IX, was analyzed using a novel data analytics approach to quantify morphometric differences in the cytoarchitecture of digitalized sections of the cerebellum. This methodology consists of multivariate and multi-aspect testing for cytoarchitecture ranking, based on neuronal cell complexity among populations defined by factors such as sex, age or pathology. In this context, we computed a set of shape descriptors of neural cell morphology and categorized them into three domains: size, regularity and density. The output and results of our methodology are multivariate in nature, allowing an in-depth analysis of the cytoarchitectonic organization and morphology of cells. Interestingly, the Purkinje neurons and the underlying granule cells revealed the same morphological pattern: females possessed larger, denser and more irregular neurons than males. In the freemartin, Purkinje neurons showed an intermediate setting between males and females, while the granule cells were the largest, most regular and dense. This methodology could be a powerful instrument for carrying out morphometric analyses, providing robust bases for objective tissue screening, especially in the field of neurodegenerative pathologies.
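    The abstract's three descriptor domains (size, regularity, density) can be illustrated with a minimal sketch. The study's actual descriptor set is not listed here, so the functions below are hypothetical stand-ins: polygon area as a size descriptor, circularity as a regularity descriptor, and cells per unit area as a density descriptor.

    ```python
    import math

    def shoelace_area(contour):
        """Polygon area via the shoelace formula (size-domain descriptor)."""
        n = len(contour)
        s = 0.0
        for i in range(n):
            x1, y1 = contour[i]
            x2, y2 = contour[(i + 1) % n]
            s += x1 * y2 - x2 * y1
        return abs(s) / 2.0

    def perimeter(contour):
        """Total edge length of the closed outline."""
        n = len(contour)
        return sum(math.dist(contour[i], contour[(i + 1) % n]) for i in range(n))

    def circularity(contour):
        """4*pi*A / P^2: exactly 1.0 for a circle, lower for irregular
        outlines (regularity-domain descriptor)."""
        p = perimeter(contour)
        return 4.0 * math.pi * shoelace_area(contour) / (p * p)

    def cell_density(centroids, region_area):
        """Cells per unit area of the sampled region (density-domain descriptor)."""
        return len(centroids) / region_area

    # A unit square: area 1, perimeter 4, circularity pi/4 ≈ 0.785
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    print(shoelace_area(square), circularity(square))
    ```

    Descriptors like these, computed per cell and compared across populations (male, female, freemartin), are the kind of multivariate input the ranking methodology would consume.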

    Analysis of Replica Selection Protocols for Grid Data Access Services

    Efficient Data Grids should provide users with services for optimised access to data. In the design of data access services, it is therefore extremely relevant to evaluate and compare adaptable data selection algorithms, especially when the location and number of file copies are expected to change dynamically over time. In this paper we use Petri Nets to formally evaluate candidate protocols to be used by data access services for the selection of Grid-wide replicated data files that need to be analysed by Grid jobs. In particular, we compare a centralised Replica Catalogue-based protocol with a Peer-to-Peer protocol, where there is no centralised knowledge about data location. Our results show that the Peer-to-Peer protocol is more scalable than the centralised one and outperforms it when used in Data Grids with more than a few sites.
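    The scalability argument can be made concrete with a back-of-envelope queueing sketch (this is not the paper's Petri Net model; all rates and service times below are made-up parameters): a single Replica Catalogue absorbs the aggregate query load of every site, while a peer-to-peer scheme spreads lookups so per-node load stays roughly constant as the Grid grows.

    ```python
    def central_lookup_time(n_sites, query_rate_per_site, service_time):
        """All sites query one Replica Catalogue, modelled as an M/M/1
        server: its utilisation grows linearly with the number of sites,
        and it saturates once utilisation reaches 1."""
        lam = n_sites * query_rate_per_site      # aggregate arrival rate
        rho = lam * service_time                 # catalogue utilisation
        if rho >= 1.0:
            return float("inf")                  # catalogue saturated
        return service_time / (1.0 - rho)        # mean M/M/1 response time

    def p2p_lookup_time(n_sites, query_rate_per_site, service_time):
        """Peer-to-peer lookup: each node serves only its own share of
        queries, so per-node utilisation is independent of Grid size
        (routing-hop latency is ignored in this sketch)."""
        rho = query_rate_per_site * service_time
        return service_time / (1.0 - rho) if rho < 1.0 else float("inf")

    for n in (10, 50, 100, 200):
        print(n, central_lookup_time(n, 0.5, 0.01), p2p_lookup_time(n, 0.5, 0.01))
    ```

    In this toy model the centralised lookup time climbs with site count and becomes unbounded at saturation, while the peer-to-peer time barely moves — the qualitative behaviour the abstract reports for Grids with more than a few sites.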

    Next-Generation EU DataGrid Management Services

    We describe the architecture and initial implementation of the next-generation of Grid Data Management Middleware in the EU DataGrid (EDG) project. The new architecture stems from our experience together with the user requirements gathered during the two years of running our initial set of Grid Data Management Services. All of our new services are based on the Web Service technology paradigm, very much in line with the emerging Open Grid Services Architecture (OGSA). We have modularized our components and invested a great amount of effort in developing secure, extensible and robust services, starting from the design but also using a streamlined build and testing framework. Our service components are: Replica Location Service, Replica Metadata Service, Replica Optimization Service, Replica Subscription and high-level replica management. The service security infrastructure is fully GSI-enabled, hence compatible with the existing Globus Toolkit 2-based services; moreover, it allows for fine-grained authorization mechanisms that can be adjusted depending on the service semantics.
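    The core mapping a Replica Location Service maintains can be sketched in a few lines. This is a deliberately naive in-memory stand-in (the real EDG service is a distributed, GSI-secured web service; the class, method names and example identifiers below are all illustrative): it tracks which physical file names (PFNs) are registered as replicas of a logical file name (LFN).

    ```python
    class ToyReplicaLocationService:
        """Minimal in-memory sketch of a Replica Location Service:
        maps a logical file name (LFN) to the physical replicas (PFNs)
        currently registered for it."""

        def __init__(self):
            self._catalog = {}  # LFN -> set of PFNs

        def register(self, lfn, pfn):
            """Record that a new physical replica of lfn exists at pfn."""
            self._catalog.setdefault(lfn, set()).add(pfn)

        def unregister(self, lfn, pfn):
            """Remove a replica mapping, e.g. after a replica is deleted."""
            self._catalog.get(lfn, set()).discard(pfn)

        def list_replicas(self, lfn):
            """Return all known physical locations of lfn (sorted for
            deterministic output); empty list if the LFN is unknown."""
            return sorted(self._catalog.get(lfn, set()))

    # Hypothetical identifiers, for illustration only.
    rls = ToyReplicaLocationService()
    rls.register("lfn:higgs-run42.root", "srm://cern.ch/data/higgs-run42.root")
    rls.register("lfn:higgs-run42.root", "srm://ral.ac.uk/data/higgs-run42.root")
    print(rls.list_replicas("lfn:higgs-run42.root"))
    ```

    The other listed components layer on top of such a mapping: the Replica Metadata Service attaches attributes to LFNs, and the Replica Optimization Service chooses among the returned PFNs.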

    Optorsim: a simulation tool for scheduling and replica optimisation in data grids

    In large-scale Grids, the replication of files to different sites is an important data management mechanism which can reduce access latencies and give improved usage of resources such as network bandwidth, storage and computing power. In the search for an optimal data replication strategy, the Grid simulator OptorSim was developed as part of the European DataGrid project. Simulations of various HEP Grid scenarios have been undertaken using different job scheduling and file replication algorithms, with the experimental emphasis being on physics analysis use-cases. Previously, the CMS Data Challenge 2002 testbed and UK GridPP testbed were among those simulated; recently, our focus has been on the LCG testbed. A novel economy-based strategy has been investigated as well as more traditional methods, with the economic models showing distinct advantages for heavily loaded Grids.
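    The gist of an economy-based replication strategy is a value comparison: replicate a file only if its predicted worth exceeds that of the cheapest file it would displace from local storage. The sketch below is a simplification under assumed semantics (OptorSim's actual economic model uses a prediction function over the file-request history; `predicted_value` here is just a recent-access count):

    ```python
    def predicted_value(access_history, horizon=10):
        """Crude value estimate: number of accesses in the most recent
        window, used as a proxy for future demand."""
        return len(access_history[-horizon:])

    def should_replicate(candidate_history, cached_histories, capacity):
        """Economic decision: replicate the candidate file if the local
        store has free space, or if the candidate is predicted to be
        worth more than the least valuable file it would evict."""
        if len(cached_histories) < capacity:
            return True
        cheapest = min(predicted_value(h) for h in cached_histories.values())
        return predicted_value(candidate_history) > cheapest

    # Hypothetical access histories (timestamps of past requests).
    cache = {"fileA": [1, 2, 3], "fileB": [1]}
    print(should_replicate([1, 2, 3, 4], cache, capacity=2))  # hot file beats fileB
    ```

    Under heavy load this style of strategy keeps the most-demanded files replicated and avoids churning storage on one-off requests, which is consistent with the advantage the abstract reports for heavily loaded Grids.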

    Grid Performance Measurements with OptorSim

    Grid computing is fast emerging as the solution to the problems posed by the massive computational and data handling requirements of many current international scientific projects. Simulation of the Grid environment is important to evaluate the impact of potential data handling strategies before they are deployed on the Grid. In this paper, we look at the effects of various data replication strategies and compare them in a variety of Grid scenarios, evaluating several performance metrics. We use the Grid simulator OptorSim, and base our simulations on a world-wide Grid testbed for data intensive high energy physics experiments. Our results show that the choice of scheduling and data replication strategies can have a large effect on both job throughput and the overall consumption of Grid resources.
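    The two headline metrics named in the abstract, job throughput and per-job time, are straightforward to compute from a simulation trace. A minimal sketch, assuming the simulator exposes job start and completion timestamps (the function names are illustrative, not OptorSim's API):

    ```python
    def job_throughput(completion_times, makespan):
        """Jobs completed per unit of simulated time over the whole run."""
        return len(completion_times) / makespan

    def mean_job_time(start_times, completion_times):
        """Average wall-clock time per job, one of the per-job metrics a
        Grid simulation run would report."""
        durations = [c - s for s, c in zip(start_times, completion_times)]
        return sum(durations) / len(durations)

    # Hypothetical trace: three jobs in a 30-time-unit run.
    starts = [0, 5, 10]
    ends = [20, 30, 25]
    print(job_throughput(ends, makespan=30), mean_job_time(starts, ends))
    ```

    Comparing these numbers across scheduling and replication strategies, holding the workload and testbed fixed, is how the effect the abstract describes would be measured.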