
    iTools: A Framework for Classification, Categorization and Integration of Computational Biology Resources

    The advancement of the computational biology field hinges on progress in three fundamental directions: the development of new computational algorithms, the availability of informatics resource management infrastructures, and the capability of tools to interoperate and synergize. There is an explosion of algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources: data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space and time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community, and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existing computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources: the first is based on an ontology of computational biology resources, and the second is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open-source project, both in terms of its source code development and its meta-data content. It employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. iTools and the complete details about its specifications, usage and interfaces are available at the iTools web page http://iTools.ccb.ucla.edu
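
    As a hypothetical illustration of the machine interface to such a meta-data repository, the sketch below issues a keyword query over HTTP and prints the matching resources. The /search path, query parameter and response fields are placeholders, not the documented iTools API.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical client for a resource meta-data repository like iTools.
# The endpoint path and response fields below are illustrative only.
BASE_URL = "http://iTools.ccb.ucla.edu"  # web page cited in the abstract

def search_resources(keyword):
    """Query the repository for resources whose meta-data match a keyword."""
    url = BASE_URL + "/search?q=" + urllib.parse.quote(keyword)
    with urllib.request.urlopen(url) as response:
        return json.load(response)

# Each returned record would describe one resource of the three kinds the
# abstract lists: data, software tool, or web-service.
for resource in search_resources("image registration"):
    print(resource.get("name"), "-", resource.get("type"))
```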

    funcX: A Federated Function Serving Fabric for Science

    Exploding data volumes and velocities, new computational methods and platforms, and ubiquitous connectivity demand new approaches to computation in the sciences. These new approaches must enable computation to be mobile, so that, for example, it can occur near data, be triggered by events (e.g., the arrival of new data), be offloaded to specialized accelerators, or run remotely where resources are available. They also require new design approaches in which monolithic applications can be decomposed into smaller components that may in turn be executed separately and on the most suitable resources. To address these needs we present funcX, a distributed function-as-a-service (FaaS) platform that enables flexible, scalable, and high-performance remote function execution. funcX's endpoint software can transform existing clouds, clusters, and supercomputers into function-serving systems, while funcX's cloud-hosted service provides transparent, secure, and reliable function execution across a federated ecosystem of endpoints. We motivate the need for funcX with several scientific case studies, present our prototype design and implementation, show optimizations that deliver throughput in excess of 1 million functions per second, and demonstrate, via experiments on two supercomputers, that funcX can scale to more than 130,000 concurrent workers.
    Comment: Accepted to the ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC 2020). arXiv admin note: substantial text overlap with arXiv:1908.0490
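
    As a rough illustration of the federated FaaS model, the sketch below registers a function and runs it on a remote endpoint using the funcX Python SDK as it was documented around the time of the HPDC 2020 paper; the endpoint UUID is a placeholder, and API details may differ in later releases (the project was subsequently renamed Globus Compute).

```python
import time
from funcx.sdk.client import FuncXClient

def add(a, b):
    """A trivial function to be executed remotely on a funcX endpoint."""
    return a + b

fxc = FuncXClient()                    # authenticates against the funcX cloud service
func_id = fxc.register_function(add)   # upload the function definition once
endpoint_id = "YOUR-ENDPOINT-UUID"     # placeholder: UUID of a deployed endpoint

# Submit the function to the endpoint and poll until the result arrives.
task_id = fxc.run(1, 2, endpoint_id=endpoint_id, function_id=func_id)
while True:
    try:
        print(fxc.get_result(task_id))  # raises while the task is still pending
        break
    except Exception:
        time.sleep(2)
```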

    A cloud infrastructure for scalable computing on population imaging databanks

    This article describes the software architecture designed to cope with the computing demands of research use of complex data from the imaging biobank of the Regional Ministry of Health in the Valencia Region (CS). It proposes the use of self-configured virtual clusters on top of on-premise and public cloud infrastructures. It uses a model based on recipes and autoconfiguration to deploy virtual elastic clusters that adjust themselves to the actual workload of the study, thereby reducing operating costs and avoiding up-front investments by either the imaging biobank or the final user. All the software used is released under open-source licenses.
    Blanquer Espert, I.; Caballer Fernández, M.; Martí-Bonmatí, L.; Alberich Bayarri, A.; De La Iglesia Vayá, MDLD.; Martínez, J. (2015). A cloud infrastructure for scalable computing on population imaging databanks. International Journal of Image Mining, 1(2/3):175-187. doi:10.1504/IJIM.2015.073015
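
    As a simplified illustration of the elasticity idea, the sketch below computes how many worker nodes a virtual cluster should hold for its current workload; the recipe-driven deployment and contextualization described in the article are not shown, and all names and limits here are illustrative assumptions.

```python
import math

def target_node_count(queued_jobs, running_jobs, jobs_per_node,
                      min_nodes=0, max_nodes=16):
    """Return how many worker nodes the virtual cluster should currently have.

    The cluster grows when jobs queue up and shrinks back toward min_nodes
    when the workload drains, so idle cloud resources are not paid for.
    """
    needed = math.ceil((queued_jobs + running_jobs) / jobs_per_node)
    return max(min_nodes, min(max_nodes, needed))

# Example: 12 queued and 4 running jobs, 4 jobs per node -> 4 nodes.
print(target_node_count(queued_jobs=12, running_jobs=4, jobs_per_node=4))
```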

    A Step Change for Earth System Research: Future Earth – Research for Global Sustainability

    More integration between scientific disciplines, and between the scientific, development and policy communities, has been called for by nations and organisations around the world to address the mounting challenge of a transition to sustainability in general and sustainable development in particular.

    Non-Imaging Medical Data Synthesis for Trustworthy AI: A Comprehensive Survey

    Data quality is the key factor for the development of trustworthy AI in healthcare. A large volume of curated datasets with controlled confounding factors can help improve the accuracy, robustness and privacy of downstream AI algorithms. However, access to good-quality datasets is limited by the technical difficulty of data acquisition, and large-scale sharing of healthcare data is hindered by strict ethical restrictions. Data synthesis algorithms, which generate data with a distribution similar to that of real clinical data, can serve as a potential solution to the scarcity of good-quality data during the development of trustworthy AI. However, state-of-the-art data synthesis algorithms, especially deep learning algorithms, focus more on imaging data while neglecting the synthesis of non-imaging healthcare data, including clinical measurements, medical signals and waveforms, and electronic healthcare records (EHRs). Thus, in this paper, we review synthesis algorithms, particularly for non-imaging medical data, with the aim of providing trustworthy AI in this domain. This tutorial-styled review paper provides comprehensive descriptions of non-imaging medical data synthesis, covering algorithms, evaluations, limitations and future research directions.
    Comment: 35 pages, submitted to ACM Computing Surveys
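
    As a deliberately simple illustration of matching the distribution of real tabular data, the sketch below fits a multivariate Gaussian to records and samples synthetic ones; the deep-learning synthesizers surveyed in the paper are far more sophisticated, and all data here are simulated.

```python
import numpy as np

def fit_gaussian_synthesizer(real_data):
    """Estimate the mean vector and covariance matrix of tabular records."""
    return real_data.mean(axis=0), np.cov(real_data, rowvar=False)

def synthesize(mean, cov, n_samples, seed=0):
    """Draw synthetic records from the fitted multivariate Gaussian."""
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Toy usage: 500 synthetic "patients" with 4 correlated clinical measurements,
# matching the first- and second-order statistics of the simulated real data.
rng = np.random.default_rng(42)
mixing = np.eye(4) + np.triu(np.full((4, 4), 0.4), k=1)   # induce correlations
real = rng.normal(size=(500, 4)) @ mixing + 1.0
mean, cov = fit_gaussian_synthesizer(real)
synthetic = synthesize(mean, cov, n_samples=500)
print(synthetic.shape)
```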

    A Neuroimaging Web Interface for Data Acquisition, Processing and Visualization of Multimodal Brain Images

    Structural and functional brain images are generated as essential modalities for medical experts to learn about the different functions of the brain. These images are typically visually inspected by experts. Many software packages are available to process medical images, but they are complex and difficult to use. The software packages are also hardware intensive. As a consequence, this dissertation proposes a novel Neuroimaging Web Services Interface (NWSI) as a series of processing pipelines for a common platform to store, process, visualize and share data. The NWSI system is made up of password-protected interconnected servers accessible through a web interface. The web interface driving the NWSI is based on Drupal, a popular open-source content management system. Drupal provides a user-based platform in which the core code for the security and design tools is updated and patched frequently. New features can be added via modules, while maintaining the core software secure and intact. The webserver architecture allows for the visualization of results and the downloading of tabulated data. Several forms are available to capture clinical data. The processing pipeline starts with a FreeSurfer (FS) reconstruction of T1-weighted MRI images. Subsequently, PET, DTI, and fMRI images can be uploaded. The webserver captures uploaded images and performs essential functionalities, while processing occurs in supporting servers. The computational platform is responsive and scalable. The current pipeline for PET processing calculates all regional Standardized Uptake Value ratios (SUVRs). The FS and SUVR calculations have been validated using Alzheimer's Disease Neuroimaging Initiative (ADNI) results posted at the Laboratory of Neuro Imaging (LONI). The NWSI system provides access to a calibration process through the centiloid scale, consolidating Florbetapir and Florbetaben tracers in amyloid PET images. The interface also offers onsite access to machine learning algorithms, and introduces new heat maps that augment expert visual rating of PET images. NWSI has been piloted using data and expertise from Mount Sinai Medical Center, the 1Florida Alzheimer's Disease Research Center (ADRC), Baptist Health South Florida, Nicklaus Children's Hospital, and the University of Miami. All results were obtained using our processing servers in order to maintain data validity, consistency, and minimal processing bias.
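
    As an illustration of the SUVR computation in such a pipeline, the sketch below divides the mean PET uptake in each FreeSurfer-labelled target region by the mean uptake in a reference region. The abstract does not specify the reference region, label sets or I/O stack NWSI uses, so the nibabel-based inputs here are assumptions.

```python
import numpy as np
import nibabel as nib   # common neuroimaging I/O library (assumed available)

def regional_suvrs(pet_path, seg_path, target_regions, reference_labels):
    """Compute regional SUVRs: mean regional uptake / mean reference uptake.

    pet_path         : PET volume co-registered to the segmentation space
    seg_path         : FreeSurfer parcellation volume (e.g. aparc+aseg)
    target_regions   : dict mapping region name -> list of label values
    reference_labels : label values of the reference region (e.g. cerebellum)
    """
    pet = nib.load(pet_path).get_fdata()
    seg = nib.load(seg_path).get_fdata()

    reference_mean = pet[np.isin(seg, reference_labels)].mean()
    return {name: pet[np.isin(seg, labels)].mean() / reference_mean
            for name, labels in target_regions.items()}
```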

    Efficient Implementation of Stochastic Inference on Heterogeneous Clusters and Spiking Neural Networks

    Neuromorphic computing refers to brain-inspired algorithms and architectures. This paradigm of computing can solve complex problems that could not be solved with traditional computing methods, because such implementations learn to identify the required features and classify them based on their training, akin to how brains function. This task involves performing computation on large quantities of data. With this inspiration, a comprehensive multi-pronged approach is employed to study and efficiently implement a neuromorphic inference model, both on heterogeneous clusters of traditional von Neumann architectures and by developing spiking neural networks (SNNs) for native and ultra-low-power implementation. In this regard, an extendable high-performance computing (HPC) framework and optimizations are proposed for heterogeneous clusters to modularize complex neuromorphic applications in a distributed manner. To achieve the best possible throughput and load balancing for such modularized architectures, a set of algorithms is proposed to suggest the optimal mapping of different modules, as an asynchronous pipeline, to the available cluster resources while considering the complex data dependencies between stages. On the other hand, SNNs are more biologically plausible and can achieve ultra-low-power implementation due to their sparse, spike-based communication, which is possible with emerging non-von Neumann computing platforms. As a significant step in this direction, spiking neuron models capable of distributed online learning are proposed. A high-performance SNN simulator (SpNSim) is developed for the simulation of large-scale networks of mixed neuron models. An accompanying digital hardware neuron RTL is also proposed for efficient real-time implementation of SNNs capable of online learning. Finally, a methodology for mapping a probabilistic graphical model to an off-the-shelf neurosynaptic processor (IBM TrueNorth) as a stochastic SNN is presented, achieving ultra-low power consumption.
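
    As a minimal illustration of spiking neuron dynamics, the sketch below performs one discrete-time update of a layer of leaky integrate-and-fire neurons; the dissertation's mixed neuron models, online-learning rules and SpNSim internals are not described in the abstract, so this only shows the basic mechanism such simulators build on, with all parameter values illustrative.

```python
import numpy as np

def lif_step(v, input_spikes, weights, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """One discrete-time update of a layer of leaky integrate-and-fire neurons.

    v            : membrane potentials, shape (n_neurons,)
    input_spikes : binary spike vector from the previous layer, shape (n_inputs,)
    weights      : synaptic weight matrix, shape (n_neurons, n_inputs)
    Returns the updated potentials and this layer's output spikes.
    """
    v = v + dt * (-v / tau) + weights @ input_spikes   # leak, then integrate input
    fired = v >= v_thresh                               # threshold crossing -> spike
    v = np.where(fired, v_reset, v)                     # reset neurons that fired
    return v, fired.astype(float)

# Example: 3 neurons receiving 5 binary input spikes through random weights.
rng = np.random.default_rng(0)
v = np.zeros(3)
spikes_in = rng.integers(0, 2, size=5).astype(float)
weights = rng.normal(0.3, 0.1, size=(3, 5))
v, spikes_out = lif_step(v, spikes_in, weights)
print(spikes_out)
```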