
    The BIDS Toolbox: A web service to manage brain imaging datasets

    Data sharing is a key factor for ensuring reproducibility and transparency of scientific experiments, and neuroimaging is no exception. The vast heterogeneity of data formats and imaging modalities utilised in the field makes it a very challenging problem. In this context, the Brain Imaging Data Structure (BIDS) appears as a solution for organising and describing neuroimaging datasets. Since its publication in 2015, BIDS has gained widespread attention in the field, as it provides a common way to arrange and share multimodal brain images. Despite the evident benefits it presents, BIDS has not been widely adopted in the field of MRI yet, and we believe that this is due to the lack of a go-to tool to create and manage BIDS datasets. Motivated by this, we present the BIDS Toolbox, a web service to manage brain imaging datasets in BIDS format. Unlike other tools, the BIDS Toolbox allows the creation and modification of BIDS-compliant datasets based on MRI data. It provides both a web interface and REST endpoints for its use. In this paper we describe its design and early prototype, and provide a link to the public source code repository.
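
    As a rough illustration of how a client might drive such a service through its REST interface, the sketch below POSTs a dataset-creation request with libcurl. The endpoint path, host/port, and JSON fields are assumptions made for the example; the actual BIDS Toolbox API is defined in its source repository.

```cpp
// Hypothetical sketch of calling a BIDS-conversion service over REST with libcurl.
// The endpoint ("/datasets"), host/port, and JSON payload fields are assumptions
// for illustration only.
#include <curl/curl.h>
#include <iostream>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    // Assumed request body: a dataset name and the location of the raw MRI data.
    const char *body = R"({"name": "study-01", "source": "/data/raw/study-01"})";

    struct curl_slist *headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");

    curl_easy_setopt(curl, CURLOPT_URL, "http://localhost:8080/datasets"); // assumed host/port
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);

    CURLcode rc = curl_easy_perform(curl);   // response body goes to stdout by default
    if (rc != CURLE_OK)
        std::cerr << "Request failed: " << curl_easy_strerror(rc) << "\n";

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```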

    Scalability of an Eulerian-Lagrangian large-eddy simulation solver with hybrid MPI/OpenMP parallelisation

    Eulerian-Lagrangian approaches capable of accurately reproducing complex fluid flows are becoming increasingly popular due to the growing availability and capacity of High Performance Computing facilities. However, the parallelisation of the Lagrangian part of such methods is challenging when a large number of Lagrangian markers are employed. In this study, a hybrid MPI/OpenMP parallelisation strategy is presented and implemented in a finite-difference-based large-eddy simulation code featuring the immersed boundary method, which generally employs a large number of Lagrangian markers. A master-scattering-gathering strategy is used to handle the Lagrangian markers, and OpenMP is employed to distribute their computational load across several CPU threads. A classical domain-decomposition-based MPI approach is used to carry out the Eulerian, fixed-mesh fluid calculations. The results demonstrate that, by using an effective combination of MPI and OpenMP, the code can outperform a pure MPI parallelisation approach by up to 20%. Outcomes from this paper are of interest to various Eulerian-Lagrangian applications, including the immersed boundary method, the discrete element method and Lagrangian particle tracking.
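
    The master-scattering-gathering idea can be pictured with a minimal hybrid MPI/OpenMP sketch: rank 0 scatters the Lagrangian markers, every rank processes its share with OpenMP threads, and the results are gathered back. The per-marker computation below is a placeholder, not the solver's immersed-boundary kernel.

```cpp
// Minimal sketch of master-scatter-gather with hybrid MPI/OpenMP. The marker
// "force" computation is a stand-in for the real per-marker work.
#include <mpi.h>
#include <omp.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_total = 1 << 20;          // total Lagrangian markers (example value)
    const int n_local = n_total / size;   // assume it divides evenly for brevity

    std::vector<double> all_markers;
    if (rank == 0) all_markers.assign(n_total, 1.0);   // master holds every marker

    std::vector<double> local(n_local), local_force(n_local);
    MPI_Scatter(all_markers.data(), n_local, MPI_DOUBLE,
                local.data(),       n_local, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    // OpenMP spreads the per-marker work across the threads of each MPI rank.
    #pragma omp parallel for
    for (int i = 0; i < n_local; ++i)
        local_force[i] = 2.0 * local[i];   // placeholder for the marker computation

    std::vector<double> all_force;
    if (rank == 0) all_force.resize(n_total);
    MPI_Gather(local_force.data(), n_local, MPI_DOUBLE,
               all_force.data(),   n_local, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```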

    Designing a Human Computation Framework to Enhance Citizen-Government Interaction

    Human computation or Human-based computation (HBC) is a paradigm that considers the design and analysis of information processing systems in which humans participate as computational agents. In particular, humans perform small pieces of work and a computer system is in charge of orchestrating their results. In this work, we want to exploit this potential to improve the take-up of e-services by citizens interacting with governments. To that end, we propose Citizenpedia, a human computation framework aimed at fostering citizens' involvement in the public administration. Citizenpedia is presented as a web application with two main components: the Question Answering Engine, where citizens and civil servants can post and solve doubts about e-services and public administration, and the Collaborative Procedure Designer, where citizens can collaborate with civil servants in the definition and improvement of new administrative procedures and e-services. In this work, we present the design and prototype of Citizenpedia and two evaluation studies: the first, a set of on-line surveys about the components' design, and the second, a face-to-face user evaluation of the prototype. These evaluations showed that the test participants found the platform attractive and pointed out several suggestions for improving the user experience of e-services.

    Kernel density estimation in accelerators: implementation and performance evaluation

    Kernel density estimation (KDE) is a popular technique used to estimate the probability density function of a random variable. KDE is considered a fundamental data smoothing algorithm, and it is a common building block in many scientific applications. In a previous work we presented S-KDE, an efficient algorithmic approach to compute KDE that outperformed other state-of-the-art implementations, providing accurate results in much reduced execution times. Its parallel implementation targeted multi- and many-core processors. In this work we present an OpenCL implementation of S-KDE, targeting modern accelerators in a portable way. We test our implementation on three accelerators from different manufacturers, achieving speedups of around 5× compared to a hand-tuned serial version of S-KDE. We also analyze the performance of the code on these accelerators, to find out to what extent our code exploits their capabilities.
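
    For readers unfamiliar with how KDE maps onto an accelerator, the sketch below evaluates a naive one-dimensional Gaussian KDE with the OpenCL C++ bindings, assigning one work-item per evaluation point. It only illustrates the offloading pattern; it is not the S-KDE algorithm evaluated in the paper.

```cpp
// Naive Gaussian KDE on an OpenCL device: one work-item per evaluation point.
// Illustrative only; data sizes and bandwidth are arbitrary example values.
#define CL_HPP_MINIMUM_OPENCL_VERSION 120
#define CL_HPP_TARGET_OPENCL_VERSION 120
#include <CL/opencl.hpp>
#include <vector>

static const char *kSrc = R"CLC(
__kernel void kde(__global const float *samples, const int n,
                  __global const float *points,  __global float *density,
                  const float h) {
    int i = get_global_id(0);
    float acc = 0.0f;
    for (int j = 0; j < n; ++j) {
        float u = (points[i] - samples[j]) / h;
        acc += exp(-0.5f * u * u);
    }
    density[i] = acc / (n * h * 2.5066283f);   // 1/sqrt(2*pi) normalisation
}
)CLC";

int main() {
    std::vector<float> samples(10000, 0.5f), points(1024);
    for (size_t i = 0; i < points.size(); ++i) points[i] = i / 1024.0f;
    std::vector<float> density(points.size());

    cl::Context ctx(CL_DEVICE_TYPE_DEFAULT);
    cl::Program prog(ctx, kSrc, /*build=*/true);
    cl::CommandQueue q(ctx);

    cl::Buffer dSamples(ctx, samples.begin(), samples.end(), /*readOnly=*/true);
    cl::Buffer dPoints (ctx, points.begin(),  points.end(),  true);
    cl::Buffer dOut(ctx, CL_MEM_WRITE_ONLY, density.size() * sizeof(float));

    cl::KernelFunctor<cl::Buffer, int, cl::Buffer, cl::Buffer, float> kernel(prog, "kde");
    kernel(cl::EnqueueArgs(q, cl::NDRange(points.size())),
           dSamples, static_cast<int>(samples.size()), dPoints, dOut, 0.05f);

    cl::copy(q, dOut, density.begin(), density.end());   // read results back
    return 0;
}
```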

    A survey of performance modeling and simulation techniques for accelerator-based computing

    The high performance computing landscape is shifting from collections of homogeneous nodes towards heterogeneous systems, in which nodes consist of a combination of traditional out-of-order execution cores and accelerator devices. Accelerators, built around GPUs, many-core chips, FPGAs or DSPs, are used to offload compute-intensive tasks. The advent of this type of system has brought about a wide and diverse ecosystem of development platforms, optimization tools and performance analysis frameworks. This paper reviews the state of the art in performance tools for heterogeneous computing, focusing on the most popular families of accelerators: GPUs and Intel's Xeon Phi. We describe current heterogeneous systems and the development frameworks and tools that can be used to program them. The core of this survey is a review of the performance models and tools, including simulators, proposed in the literature for these platforms.

    An efficient implementation of kernel density estimation for multi-core and many-core architectures

    Kernel density estimation (KDE) is a statistical technique used to estimate the probability density function of a sample set with unknown density function. It is considered a fundamental data-smoothing problem for use with large datasets, and is widely applied in areas such as climatology and biometry. Due to the large volumes of data that these problems usually process, KDE is a computationally challenging problem. Current HPC platforms with built-in accelerators have enormous computing power, but they have to be programmed efficiently in order to take advantage of that power. We have developed a novel strategy to compute KDE using bounded kernels, designed to minimize memory accesses, and implemented it as a parallel program targeting multi-core and many-core processors. The efficiency of our code has been tested with different datasets, obtaining impressive levels of acceleration compared with alternative state-of-the-art KDE implementations.
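
    One common way to exploit a bounded kernel is sketched below: with the samples sorted, each evaluation point only visits the samples that fall within one bandwidth of it, and OpenMP parallelises over evaluation points. This illustrates the general idea of reducing memory accesses; the exact S-KDE strategy is the one described in the paper.

```cpp
// Bounded-kernel (Epanechnikov) KDE sketch: sorted samples let each evaluation
// point touch only the samples within one bandwidth, cutting memory traffic.
#include <algorithm>
#include <omp.h>
#include <vector>

std::vector<double> kde_bounded(std::vector<double> samples,
                                const std::vector<double> &points, double h) {
    std::sort(samples.begin(), samples.end());          // enables range queries
    const double norm = 0.75 / (samples.size() * h);    // Epanechnikov normalisation
    std::vector<double> density(points.size(), 0.0);

    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(points.size()); ++i) {
        // Only samples in [x - h, x + h] have a non-zero kernel value.
        auto lo = std::lower_bound(samples.begin(), samples.end(), points[i] - h);
        auto hi = std::upper_bound(samples.begin(), samples.end(), points[i] + h);
        double acc = 0.0;
        for (auto it = lo; it != hi; ++it) {
            double u = (points[i] - *it) / h;
            acc += 1.0 - u * u;                          // Epanechnikov kernel, unnormalised
        }
        density[i] = norm * acc;
    }
    return density;
}
```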

    Overcrowding detection in indoor events using scalable technologies

    The increase in the number of large-scale events held indoors (i.e., conferences and business events) opens new opportunities for crowd monitoring and access control as a way to prevent risks and provide further information about the event's development. In addition, the availability of already connectable devices among attendees makes it possible to perform non-intrusive positioning during the event, without the need for specific tracking devices. We present an algorithm for overcrowding detection based on passive Wi-Fi request capture, and a platform for event monitoring that integrates this algorithm. The platform offers access control management, attendee monitoring, and the analysis and visualization of the captured information, using a scalable software architecture. In this paper, we evaluate the algorithm in two ways: first, we test its accuracy with data captured in a real event, and then we analyze the scalability of the code in a multi-core Apache Spark-based environment. The experiments show that the algorithm provides accurate results with the captured data, and that the code scales properly.
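
    The counting idea behind probe-request-based crowd estimation can be sketched as a sliding window over captured MAC addresses: the number of distinct devices seen in the last few minutes serves as a proxy for crowd size and is compared against a threshold. The window length, threshold, and capture source below are placeholders, not the calibrated values or the full algorithm from the paper.

```cpp
// Sliding-window count of distinct devices from passively captured probe requests.
// Illustrative sketch: window and threshold are placeholders, and MAC randomisation
// handling (addressed in real deployments) is ignored.
#include <cstddef>
#include <deque>
#include <string>
#include <unordered_map>

class CrowdMonitor {
public:
    CrowdMonitor(double window_s, std::size_t threshold)
        : window_s_(window_s), threshold_(threshold) {}

    // Called for every captured probe request (timestamp in seconds, sender MAC).
    void observe(double t, const std::string &mac) {
        events_.push_back({t, mac});
        ++count_[mac];
        evict(t);
    }

    // Distinct devices seen in the window act as a proxy for crowd size.
    bool overcrowded(double t) {
        evict(t);
        return count_.size() > threshold_;
    }

private:
    void evict(double now) {
        while (!events_.empty() && events_.front().t < now - window_s_) {
            auto it = count_.find(events_.front().mac);
            if (it != count_.end() && --it->second == 0) count_.erase(it);
            events_.pop_front();
        }
    }

    struct Event { double t; std::string mac; };
    std::deque<Event> events_;
    std::unordered_map<std::string, std::size_t> count_;
    double window_s_;
    std::size_t threshold_;
};
```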

    Assessing Industrial Communication Protocols to Bridge the Gap between Machine Tools and Software Monitoring

    Industrial communication protocols are used to interconnect systems, interfaces, and machines in industrial environments. With the advent of hyper-connected factories, the role of these protocols is gaining relevance, as they enable the real-time acquisition of machine monitoring data, which can fuel real-time data analysis platforms that conduct tasks such as predictive maintenance. However, the effectiveness of these protocols is largely unknown and there is a lack of empirical evaluations comparing their performance. In this work, we evaluate OPC-UA, Modbus, and Ethernet/IP with three machine tools to assess their performance and their complexity of use from a software perspective. Our results show that Modbus provides the best latency figures and that, from a software perspective, the complexity of establishing communication differs depending on the protocol used.
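
    A latency comparison of this kind boils down to timing repeated read operations against the machine controller. The sketch below shows one such measurement for Modbus/TCP using libmodbus; the IP address, register layout, and repetition count are placeholders, and the paper's methodology also covers OPC-UA and Ethernet/IP.

```cpp
// Timing repeated Modbus/TCP register reads with libmodbus to estimate mean
// round-trip latency. Address, register layout, and repetition count are
// placeholders for illustration.
#include <cerrno>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <modbus.h>

int main() {
    modbus_t *ctx = modbus_new_tcp("192.168.0.10", 502);   // placeholder machine address
    if (ctx == nullptr) return 1;
    if (modbus_connect(ctx) == -1) {
        std::cerr << "Connection failed: " << modbus_strerror(errno) << "\n";
        modbus_free(ctx);
        return 1;
    }

    const int reps = 1000;
    uint16_t regs[8];
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < reps; ++i) {
        // Read 8 holding registers starting at address 0 (placeholder layout).
        if (modbus_read_registers(ctx, 0, 8, regs) == -1) {
            std::cerr << "Read failed: " << modbus_strerror(errno) << "\n";
            break;
        }
    }
    auto elapsed = std::chrono::duration<double, std::milli>(
                       std::chrono::steady_clock::now() - start).count();
    std::cout << "Mean read latency: " << elapsed / reps << " ms\n";

    modbus_close(ctx);
    modbus_free(ctx);
    return 0;
}
```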