
    GATE : a simulation toolkit for PET and SPECT

    Monte Carlo simulation is an essential tool in emission tomography that can assist in the design of new medical imaging devices, the optimization of acquisition protocols, and the development or assessment of image reconstruction algorithms and correction techniques. GATE, the Geant4 Application for Tomographic Emission, encapsulates the Geant4 libraries to achieve a modular, versatile, scripted simulation toolkit adapted to the field of nuclear medicine. In particular, GATE allows the description of time-dependent phenomena such as source or detector movement, and source decay kinetics. This feature makes it possible to simulate time curves under realistic acquisition conditions and to test dynamic reconstruction algorithms. A public release of GATE licensed under the GNU Lesser General Public License can be downloaded at the address http://www-lphe.epfl.ch/GATE/
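    Purely as an illustration of the time-dependent source decay kinetics mentioned above, the short Python sketch below samples decay times for a radioactive source and histograms them into acquisition frames to obtain a time-activity curve. It is a generic Monte Carlo example, not GATE macro code; the isotope, initial activity and frame layout are assumptions chosen for the example.

        import numpy as np

        # Sample decay times for an initial population of nuclei and bin them
        # into acquisition frames to obtain a time-activity curve. This is a
        # generic Monte Carlo sketch; GATE drives such simulations through its
        # own macro scripting layer on top of the Geant4 libraries.

        half_life_s = 109.8 * 60          # F-18 half-life in seconds (assumed isotope)
        decay_const = np.log(2) / half_life_s
        n_nuclei = 1_000_000              # assumed initial number of radioactive nuclei

        rng = np.random.default_rng(42)
        decay_times = rng.exponential(1.0 / decay_const, size=n_nuclei)

        frame_edges = np.arange(0, 3600 + 60, 60)   # 60 s frames over a 1 h scan
        counts_per_frame, _ = np.histogram(decay_times, bins=frame_edges)

        for start, counts in zip(frame_edges[:-1], counts_per_frame[:5]):
            print(f"frame starting at {start:4.0f} s: {counts} decays")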

    PerfSONAR: A Service Oriented Architecture for Multi-domain Network Monitoring

    In the area of network monitoring, many tools are already available to measure a variety of metrics. However, these tools are often limited to a single administrative domain, so no established methodology currently exists for monitoring network connections that span multiple domains. In addition, these tools only monitor the network from a technical point of view, without providing meaningful network performance indicators for different user groups; such indicators should be derived from the measured basic metrics. In this paper a Service Oriented Architecture is presented which is able to perform multi-domain measurements without being limited to specific kinds of metrics. A Service Oriented Architecture has been chosen as it allows for increased flexibility and scalability in comparison to traditional software engineering techniques. The resulting measurement framework will be applied for measurement
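    To make the multi-domain idea concrete, the sketch below shows a client that queries a measurement service in each administrative domain and derives a simple end-to-end indicator from the per-domain metrics. The endpoint URLs and JSON layout are hypothetical and are not the perfSONAR service interfaces.

        import json
        from urllib.request import urlopen

        # Hypothetical multi-domain measurement client: each domain exposes a
        # service returning basic metrics (here one-way delay samples), and the
        # client derives an end-to-end indicator from them.

        DOMAIN_SERVICES = [
            "https://measurement.domain-a.example/api/owd?path=src-to-egress",
            "https://measurement.domain-b.example/api/owd?path=ingress-to-dst",
        ]

        def fetch_delay_samples(url: str) -> list[float]:
            """Fetch one-way delay samples (ms) from a single domain's service."""
            with urlopen(url, timeout=10) as resp:
                return json.load(resp)["samples_ms"]

        def end_to_end_delay_estimate(per_domain: list[list[float]]) -> float:
            """Derive a crude indicator: sum of per-domain mean delays."""
            return sum(sum(s) / len(s) for s in per_domain)

        if __name__ == "__main__":
            samples = [fetch_delay_samples(url) for url in DOMAIN_SERVICES]
            print(f"estimated end-to-end delay: {end_to_end_delay_estimate(samples):.2f} ms")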

    A contribution for data processing and interoperability in Industry 4.0

    Master's dissertation in Systems Engineering. Industry 4.0 is expected to drive a significant change in companies' growth. The idea is to bring together important information from across the company's supply chain, enabling valuable decision-making while permitting interactions between machines and humans in real time. Autonomous systems powered by Information Technologies are enablers of Industry 4.0, such as the Internet of Things (IoT), Cyber-Physical Systems (CPS), and Big Data and analytics. IoT gathers information from every piece of the big puzzle that is the manufacturing process; cloud computing stores all that information in one place; people share information across the company, its supply chain and its hierarchical levels through the integration of systems; finally, Big Data and analytics provide the intelligence that will improve Industry 4.0. Methods and tools in Industry 4.0 are designed to increase interoperability across industrial stakeholders, and to make the complete process possible, standardisation must be implemented across the company. Two reference models for Industry 4.0 were studied: RAMI 4.0 and IIRA. RAMI 4.0, a German initiative, focuses on industrial digitalisation, while IIRA, an American initiative, focuses on the Internet of Things world, i.e. energy, healthcare and transportation. The two initiatives aim to obtain intelligence data from processes while enabling interoperability among systems, and representatives of the two reference models are working together on the technological interface standards that could be used by companies joining this new era. This study focuses on the interoperability between systems: even though there must be a model to guide a company into Industry 4.0, that model ought to be mutable and flexible enough to handle differences in manufacturing processes; for example, automotive Industry 4.0 will not take the same approach as aviation Industry 4.0.
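    As a toy illustration of the interoperability problem the dissertation addresses, the sketch below maps two vendor-specific sensor payloads onto one common representation that higher-level analytics could consume. Field names, units and vendors are invented for the example and do not come from RAMI 4.0 or IIRA.

        from dataclasses import dataclass

        # Two machines report the same physical quantity with different payloads;
        # a thin adapter layer normalises both into a shared schema.

        @dataclass
        class Measurement:
            asset_id: str
            quantity: str
            value: float
            unit: str

        def from_vendor_a(payload: dict) -> Measurement:
            # Vendor A reports temperature in degrees Celsius under "temp_c".
            return Measurement(payload["machine"], "temperature", payload["temp_c"], "degC")

        def from_vendor_b(payload: dict) -> Measurement:
            # Vendor B reports the same quantity in tenths of a degree under "t".
            return Measurement(payload["id"], "temperature", payload["t"] / 10.0, "degC")

        if __name__ == "__main__":
            readings = [
                from_vendor_a({"machine": "press-01", "temp_c": 71.5}),
                from_vendor_b({"id": "lathe-07", "t": 684}),
            ]
            for r in readings:
                print(f"{r.asset_id}: {r.quantity} = {r.value} {r.unit}")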

    Data State of Play - Compliance Testing and Interoperability Checking

    The document provides an inventory of existing solutions for compliance testing and interoperability checking for data, taking into account the draft INSPIRE data specifications conceptual model (D2.5), the first draft of the INSPIRE Methodology for the development of data specifications (D2.6), and the first draft of the data Specifications Guidelines for the encoding of spatial data (D2.7). Even though the emphasis is on spatial and geographical data, the document investigates applicable solutions outside the Geographical Information System domain, with particular attention paid to checking compliance with "application schemas" as defined in the previously mentioned documents.
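    One common form of such compliance testing is validating an XML encoding of spatial data against an application schema expressed as an XML Schema. The minimal sketch below shows this with lxml; the file names are placeholders, and schema validity is only one of several checks an inventory of tools would cover.

        from lxml import etree

        # Validate a (placeholder) GML dataset against a (placeholder) XSD
        # application schema and report any validation errors.

        schema_doc = etree.parse("application_schema.xsd")   # placeholder schema file
        schema = etree.XMLSchema(schema_doc)

        instance = etree.parse("dataset.gml")                # placeholder data file
        if schema.validate(instance):
            print("dataset conforms to the application schema")
        else:
            for error in schema.error_log:
                print(f"line {error.line}: {error.message}")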

    A Grid architectural approach applied for backward compatibility to a production system for events simulation.

    The distributed systems paradigm has gained popularity over the last 15 years, thanks also to the broad diffusion of distributed frameworks proposed for the Internet platform. In the late '90s a new concept started to play a major role in the field of distributed computing: the Grid. This thesis presents a study of the integration between the framework of BaBar, an experiment in the field of High Energy Physics, and a grid system like the one implemented by the Italian National Institute for Nuclear Physics (INFN), the INFNGrid project, which provides support for several research domains. The main goal was to adapt an already well-established system, like the one implemented in the BaBar pipeline and based on local centres not interconnected with one another, to a kind of technology that was not available when the experiment's framework was designed. Although this new approach concerned only some aspects of the experiment, namely the production of simulated events using Monte Carlo methods, the efforts described here are an example of how an old experiment can bridge the gap toward Grid computing, even by adopting solutions designed for more recent projects. The complete evolution of this integration is explained, from the early stages to the current development, to show the progress achieved, presenting results comparable with the production rates obtained using the conventional BaBar approach, in order to examine the potential benefits and drawbacks in a concrete case study
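    As an illustration of how a Monte Carlo production request can be split into independent grid jobs, the sketch below writes one JDL file per simulation run in the style of the EDG/gLite middleware deployed on INFNGrid. Executable names, run ranges and sandbox contents are invented; the real BaBar production system is considerably more involved.

        from pathlib import Path

        # Generate one JDL job description per Monte Carlo run so that each run
        # can be submitted to the grid independently. All names are placeholders.

        JDL_TEMPLATE = """\
        Executable    = "run_simulation.sh";
        Arguments     = "--run {run} --events {events}";
        StdOutput     = "run_{run}.out";
        StdError      = "run_{run}.err";
        InputSandbox  = {{"run_simulation.sh", "production.cfg"}};
        OutputSandbox = {{"run_{run}.out", "run_{run}.err", "events_{run}.root"}};
        """

        def write_jobs(first_run: int, n_jobs: int, events_per_job: int, out_dir: str = "jobs") -> None:
            """Write one JDL file per simulation run."""
            target = Path(out_dir)
            target.mkdir(exist_ok=True)
            for run in range(first_run, first_run + n_jobs):
                jdl = JDL_TEMPLATE.format(run=run, events=events_per_job)
                (target / f"run_{run}.jdl").write_text(jdl)

        if __name__ == "__main__":
            write_jobs(first_run=1000, n_jobs=5, events_per_job=20000)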

    Dimensioning and workload-distribution algorithms for lambda grids

    Grids consist of a collection of computing and storage elements that may be geographically dispersed but whose combined capacity one wishes to exploit. To that end, these elements must be connected by a network. Since many scientific applications use a Grid, and these applications typically process large amounts of data, it is necessary to provide a network that can transport such large data streams reliably. Optical transport networks are ideally suited to this, and Grids that use such a network are called lambda Grids. This thesis describes a framework in which the design and dimensioning of optical networks for lambda Grids can be expressed. It also discusses how workload can be distributed over a Grid once it has been dimensioned. A large part of the results was obtained by simulation, using an in-house Grid simulation package that focuses specifically on network and Grid elements. The design of this simulator and the accompanying implementation choices are therefore explained in detail in this work
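    As a toy illustration of the workload-distribution side of the problem, the sketch below assigns jobs greedily to the grid site with the most remaining compute capacity. Site names, capacities and job sizes are invented; the thesis's actual dimensioning and scheduling algorithms, and its dedicated Grid simulator, are far richer than this.

        # Greedy assignment of jobs to grid sites with finite capacity.

        sites = {"site-A": 100.0, "site-B": 60.0, "site-C": 40.0}   # capacity, arbitrary units
        remaining = dict(sites)

        jobs = [12.0, 30.0, 7.0, 25.0, 18.0, 9.0]                   # job sizes, same units
        assignment: dict[str, list[float]] = {name: [] for name in sites}

        for job in sorted(jobs, reverse=True):                      # place largest jobs first
            best_site = max(remaining, key=remaining.get)           # site with most free capacity
            if remaining[best_site] < job:
                raise RuntimeError(f"no site can still accommodate a job of size {job}")
            remaining[best_site] -= job
            assignment[best_site].append(job)

        for name in sites:
            used = sites[name] - remaining[name]
            print(f"{name}: jobs {assignment[name]}, utilisation {used / sites[name]:.0%}")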

    Learning by doing on the EGEE GRID and first performance analysis of CODESA-3D multirun submission

    The TEMA project (Training on Environmental Modelling and Applications) is a CRS4 training initiative in the field of computational hydrology and grid computing (January-September 2006). The personnel involved were Fabrizio Murgia (trainee) and Giuditta Lecca (tutor). The objectives of the project were: to acquire specialized skills in grid computing, with special emphasis on computational sub-surface hydrology; to develop and test software procedures to run Monte Carlo simulations on the EGEE production grid; and to produce a technical report and some seminars about grid computing. The acquired competences and skills will be used in the ongoing projects GRIDA3, CyberSAR and DEGREE
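    To sketch what a Monte Carlo "multirun" submission procedure involves, the example below draws random realisations of an uncertain model parameter, writes one input file per run, and records placeholder submission commands. The parameter, file layout and submission command are assumptions; they are not the actual CODESA-3D inputs or the exact EGEE middleware invocation.

        import numpy as np
        from pathlib import Path

        # Prepare a set of Monte Carlo runs: one input file per random realisation
        # plus a script listing placeholder grid-submission commands.

        rng = np.random.default_rng(7)
        n_runs = 10
        # lognormal hydraulic conductivity realisations (m/s), purely illustrative
        conductivities = rng.lognormal(mean=np.log(1e-5), sigma=0.5, size=n_runs)

        run_dir = Path("multirun")
        run_dir.mkdir(exist_ok=True)

        commands = []
        for i, k in enumerate(conductivities):
            input_file = run_dir / f"run_{i:03d}.input"
            input_file.write_text(f"hydraulic_conductivity = {k:.3e}\n")
            # placeholder for the real middleware job-submission call
            commands.append(f"submit-grid-job --input {input_file}")

        (run_dir / "submit_all.sh").write_text("\n".join(commands) + "\n")
        print(f"prepared {n_runs} runs in {run_dir}/")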

    A survey of general-purpose experiment management tools for distributed systems

    In the field of large-scale distributed systems, experimentation is particularly difficult. The studied systems are complex, often nondeterministic and unreliable, software is plagued with bugs, and the experiment workflows are unclear and hard to reproduce. These obstacles have led many independent researchers to design tools to control their experiments, boost productivity and improve the quality of scientific results. Despite much research in the domain of distributed systems experiment management, the current fragmentation of efforts calls for a general analysis. We therefore propose to build a framework to uncover missing functionality of these tools, enable meaningful comparisons between them and find recommendations for future improvements and research. The contribution of this paper is twofold. First, we provide an extensive list of features offered by general-purpose experiment management tools dedicated to distributed systems research on real platforms. We then use it to assess existing solutions and compare them, outlining possible future paths for improvement

    A Globally Distributed System for Job, Data, and Information Handling for High Energy Physics
