
    J-PET Framework: Software platform for PET tomography data reconstruction and analysis

    J-PET Framework is an open-source software platform for data analysis, written in C++ and based on the ROOT package. It provides a common environment for implementing reconstruction, calibration, and filtering procedures, as well as user-level analyses of Positron Emission Tomography data. The library contains a set of building blocks that users, even those with little programming experience, can combine into chains of processing tasks through a convenient, simple, and well-documented API. The generic input-output interface allows processing of data from various sources: low-level data from the tomography acquisition system or from diagnostic setups such as digital oscilloscopes, as well as high-level tomography structures, e.g., sinograms or lists of lines-of-response. Moreover, the environment can be interfaced with Monte Carlo simulation packages such as GEANT and GATE, which are commonly used in the medical scientific community.
    Comment: 14 pages, 5 figures
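The "chains of processing tasks" pattern the abstract describes can be sketched minimally. This is a conceptual illustration in Python, not the framework's actual C++/ROOT API; all class and method names here are hypothetical:

```python
# Conceptual sketch of a task-chain pipeline in the style described for the
# J-PET Framework. Names and the event representation are invented.

class Task:
    """A single processing step that transforms an event stream."""
    def run(self, events):
        raise NotImplementedError

class FilterTask(Task):
    """Drop events below an energy threshold (a hypothetical filtering cut)."""
    def __init__(self, threshold):
        self.threshold = threshold
    def run(self, events):
        return [e for e in events if e["energy"] >= self.threshold]

class ScaleTask(Task):
    """Apply a calibration constant to each event's energy."""
    def __init__(self, gain):
        self.gain = gain
    def run(self, events):
        return [{**e, "energy": e["energy"] * self.gain} for e in events]

class Manager:
    """Chains tasks and pushes data through them in registration order."""
    def __init__(self):
        self.tasks = []
    def add_task(self, task):
        self.tasks.append(task)
        return self  # allow fluent chaining
    def process(self, events):
        for task in self.tasks:
            events = task.run(events)
        return events

manager = Manager()
manager.add_task(FilterTask(threshold=50.0)).add_task(ScaleTask(gain=1.1))
raw = [{"energy": 40.0}, {"energy": 60.0}, {"energy": 100.0}]
result = manager.process(raw)
print(result)  # two events survive the cut, energies scaled by the gain
```

The point of the pattern is that each block has a uniform interface, so users can recombine filtering and calibration steps without touching the pipeline machinery.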

    ImageJ2: ImageJ for the next generation of scientific image data

    ImageJ is an image analysis program extensively used in the biological sciences and beyond. Due to its ease of use, recordable macro language, and extensible plug-in architecture, ImageJ enjoys contributions from non-programmers, amateur programmers, and professional developers alike. Enabling such a diversity of contributors has resulted in a large community that spans the biological and physical sciences. However, a rapidly growing user base, diverging plugin suites, and technical limitations have revealed a clear need for a concerted software engineering effort to support emerging imaging paradigms and to ensure the software's ability to handle the requirements of modern science. Due to these new and emerging challenges in scientific imaging, ImageJ is at a critical development crossroads. We present ImageJ2, a total redesign of ImageJ offering a host of new functionality. It separates concerns, fully decoupling the data model from the user interface. It emphasizes integration with external applications to maximize interoperability. Its robust new plugin framework allows everything from image formats to scripting languages to visualization to be extended by the community. The redesigned data model supports arbitrarily large, N-dimensional datasets, which are increasingly common in modern image acquisition. Despite the scope of these changes, backwards compatibility is maintained, so that this new functionality can be seamlessly integrated with the classic ImageJ interface, allowing users and developers to migrate to these new methods at their own pace. ImageJ2 provides a framework engineered for flexibility, intended to support these requirements as well as accommodate future needs.
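The extensibility idea in the abstract, where image formats, scripting languages, and visualization are all community-supplied plugins discovered by the core, can be sketched as a small registry. This is a generic illustration of the pattern in Python, not the real ImageJ2/SciJava API; every name below is hypothetical:

```python
# Conceptual sketch of a plugin registry: the core dispatches by category
# and name, never hard-coding a specific implementation. Not the ImageJ2 API.

PLUGINS = {}

def register(kind, name):
    """Decorator: register a plugin class under a category (format, viz, ...)."""
    def wrap(cls):
        PLUGINS.setdefault(kind, {})[name] = cls
        return cls
    return wrap

@register("format", "raw")
class RawFormat:
    def read(self, data):
        return list(data)

@register("format", "inverted")
class InvertedFormat:
    def read(self, data):
        return [255 - v for v in data]  # toy 8-bit inversion

def open_image(fmt, data):
    """The core looks up whichever reader plugin is registered for `fmt`;
    the data model stays decoupled from any concrete format."""
    return PLUGINS["format"][fmt]().read(data)

pixels = open_image("inverted", [0, 128, 255])
print(pixels)  # [255, 127, 0]
```

Decoupling lookup from implementation is what lets third parties add formats or visualizations without modifying the core, which is the property the redesign emphasizes.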

    The space physics environment data analysis system (SPEDAS)

    With the advent of the Heliophysics/Geospace System Observatory (H/GSO), a complement of multi-spacecraft missions and ground-based observatories to study the space environment, data retrieval, analysis, and visualization of space physics data can be daunting. The Space Physics Environment Data Analysis System (SPEDAS), a grass-roots software development platform (www.spedas.org), is now officially supported by NASA Heliophysics as part of its data environment infrastructure. It serves more than a dozen space missions and ground observatories and can integrate the full complement of past and upcoming space physics missions with minimal resources, following clear, simple, and well-proven guidelines. Free, modular, and configurable to the needs of individual missions, it works in both command-line mode (ideal for experienced users) and Graphical User Interface (GUI) mode (reducing the learning curve for first-time users). Both options have “crib-sheets,” user-command sequences in ASCII format that facilitate record-and-repeat actions, especially for complex operations and plotting. Crib-sheets enhance scientific interactions, as users can move rapidly and accurately from exchanges of technical information on data processing to efficient discussions regarding data interpretation and science. SPEDAS can readily query and ingest all International Solar Terrestrial Physics (ISTP)-compatible products from the Space Physics Data Facility (SPDF), enabling access to a vast collection of historic and current mission data. The planned incorporation of Heliophysics Application Programmer’s Interface (HAPI) standards will facilitate data ingestion from distributed datasets that adhere to these standards. Although SPEDAS is currently Interactive Data Language (IDL)-based (and interfaces to Java-based tools such as Autoplot), efforts are underway to expand it further to work with Python (first as an interface tool and potentially even as an under-the-hood replacement).
    We review the SPEDAS development history, goals, and current implementation. We explain its “modes of use” with examples geared toward users, and outline its technical implementation and requirements with software developers in mind. We also describe SPEDAS personnel and software management, interfaces with other organizations, resources and support structure available to the community, and future development plans.
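The "crib-sheet" concept, a plain-text sequence of user commands that can be recorded and replayed, is easy to sketch. The following is an illustrative Python stand-in, not SPEDAS/IDL syntax; the commands and their arguments are invented:

```python
# Sketch of a record-and-repeat "crib sheet": one command per line, replayed
# against a small command registry. Hypothetical commands, not SPEDAS syntax.

import shlex

REGISTRY = {}

def command(fn):
    """Register a function as a replayable command, keyed by its name."""
    REGISTRY[fn.__name__] = fn
    return fn

state = {"data": [], "scaled": []}

@command
def load(values):
    """Parse a comma-separated list of numbers into the working state."""
    state["data"] = [float(v) for v in values.split(",")]

@command
def scale(factor):
    """Multiply the loaded values by a unit-conversion factor."""
    state["scaled"] = [v * float(factor) for v in state["data"]]

def replay(crib_sheet):
    """Replay a crib sheet; blank lines and '#' comments are skipped."""
    for line in crib_sheet.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, *args = shlex.split(line)
        REGISTRY[name](*args)

CRIB = """
# load sample values, then convert units
load 1.0,2.0,3.0
scale 10
"""
replay(CRIB)
print(state["scaled"])  # [10.0, 20.0, 30.0]
```

Because the sheet is plain ASCII, it can be exchanged between users as a precise, reproducible record of a processing session, which is the interaction benefit the abstract highlights.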

    Concurrent Image Processing Executive (CIPE). Volume 1: Design overview

    The design and implementation of a Concurrent Image Processing Executive (CIPE), intended to become the support system software for a prototype high-performance science analysis workstation, are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3 or Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local-memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. To enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.
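The distribute-and-track data management idea can be sketched abstractly: partition an image across nodes and keep a directory of which node holds which rows, so the host can locate or redistribute data without re-sending everything. This is a generic Python illustration of the technique, not CIPE's actual implementation; the cyclic mapping is an assumption:

```python
# Sketch of distributing image rows across concurrent nodes while tracking
# ownership in a directory. Illustrative only; not the CIPE design itself.

def distribute(image, n_nodes):
    """Partition rows of `image` across n_nodes with a cyclic mapping.
    Returns (blocks, directory): blocks[node] holds its rows, and
    directory[node] records which original row indices it owns."""
    blocks = {node: [] for node in range(n_nodes)}
    directory = {node: [] for node in range(n_nodes)}
    for i, row in enumerate(image):
        node = i % n_nodes          # simple cyclic assignment
        blocks[node].append(row)
        directory[node].append(i)
    return blocks, directory

image = [[r * 10 + c for c in range(4)] for r in range(6)]  # toy 6x4 "image"
blocks, directory = distribute(image, n_nodes=3)
print(directory)  # {0: [0, 3], 1: [1, 4], 2: [2, 5]}
```

The directory is what makes redistribution cheap: only rows whose ownership changes need to move between host and nodes.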

    The 11th Conference of PhD Students in Computer Science


    Image processing platform for the analysis of brain vascular patterns

    This project consists of the development of a web application to support medical professionals in the analysis of cerebrovascular image data. The objective is to build an open, modular prototype that can serve as an example or template for the development of other projects, providing an open alternative to the commercial data analysis tools currently available in the health industry. The application is developed in Python. It allows the user to load medical images contained in DICOM files; these images are processed for noise removal and binarization in order to build the result graphs. The results are three graphs: an image graph called an “isochronal map” reflecting the temporal evolution of the blood flow, an image graph showing the skeleton of the vascular system structure, and a box-plot graph representing the numerical branch data extracted from the analysis of the skeleton. The Dash framework is used to construct the user interface and to implement the user interaction functionalities. The user can load two different samples at the same time and execute the analysis to compare the results for both samples on the same screen. Finally, the application is containerized using Docker to package it and make it multi-platform. The application was tested with real sample images provided by the Hospital Sant Joan de Déu, and the results are satisfactory: the application works properly, and so do the image processing algorithms on the provided input data. Despite its limitations, the work done can serve as a starting point for future developments.
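The "isochronal map" described above, an image whose pixel values encode when blood flow arrives, amounts to finding, for each pixel, the time point at which its intensity peaks across the frame sequence. A plain-Python sketch of that core step (a stand-in for the project's actual DICOM pipeline, whose details are not given here):

```python
# Sketch of computing an isochronal map: for each pixel, the index of the
# frame where its intensity is maximal, i.e., a per-pixel "arrival time".
# Illustrative stand-in for the application's image processing chain.

def isochronal_map(frames):
    """frames: list of 2-D lists (all the same shape), one per time point.
    Returns a 2-D list giving the time index of maximum intensity per pixel."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [
        [max(range(len(frames)), key=lambda t: frames[t][r][c])
         for c in range(cols)]
        for r in range(rows)
    ]

# Three 2x2 frames: pixel (0,0) peaks at t=0, (0,1) at t=1, row 1 at t=2.
frames = [
    [[9, 1], [0, 0]],
    [[1, 9], [1, 1]],
    [[0, 2], [9, 9]],
]
print(isochronal_map(frames))  # [[0, 1], [2, 2]]
```

In the real application this would operate on denoised, binarized vessel regions extracted from the DICOM series rather than raw toy arrays.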

    Exploranative Code Quality Documents

    Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generating exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel-coordinates plots and scatterplots for data exploration, as well as graphics embedded into the text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data, and we report lessons learned in a broader scope.
    Comment: IEEE VIS VAST 201
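Template-based natural language generation from software metrics, the core mechanism named in the abstract, can be sketched in a few lines: a metric value selects a wording level, which is spliced into a sentence template. The thresholds and phrasing below are invented for illustration, not taken from the paper:

```python
# Minimal sketch of template-based NLG over a software metric: map the
# metric value to a qualitative level, then fill a sentence template.
# Thresholds and wording are illustrative assumptions.

def describe_complexity(name, cyclomatic):
    """Generate an explanatory sentence for a method's cyclomatic complexity."""
    if cyclomatic <= 5:
        level = "low"
    elif cyclomatic <= 10:
        level = "moderate"
    else:
        level = "high"
    template = "Method {name} has {level} cyclomatic complexity ({value})."
    return template.format(name=name, level=level, value=cyclomatic)

print(describe_complexity("parseConfig", 13))
# Method parseConfig has high cyclomatic complexity (13).
```

A full system would add many templates per metric, vary wording to avoid repetition, and link each generated phrase to the visualization showing the underlying data point.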

    The 10th Jubilee Conference of PhD Students in Computer Science
