
    High Performance Modelling and Computing in Complex Medical Conditions: Realistic Cerebellum Simulation and Real-Time Brain Cancer Detection

    Personalized medicine is the medicine of the future, and its development rests on ongoing technological progress. Several areas of healthcare research require high-performance systems that process huge amounts of data in real time. By exploiting High Performance Computing (HPC) technologies, scientists aim to develop accurate diagnoses and personalized therapies. Reaching these goals requires investigating three main activities: managing large-scale data acquisition and analysis, designing computational models that simulate the patient's clinical status, and developing medical support systems that provide fast decisions during diagnosis or therapy. These three aspects rely on technological systems that may appear disconnected but, in this new medicine, will in some way be connected.

    As far as data are concerned, people today are immersed in technology and produce a huge amount of heterogeneous data. Part of it has great medical potential: it could help delineate the patient's health condition and could be integrated into the medical record to support clinical decisions. This process requires systems able to organize, analyse and share such information while guaranteeing fast data usability. In this context, HPC and, in particular, multicore and manycore processors will be very important, since they can spread the computational workload over different cores to reduce processing times. These solutions are also crucial in computational modelling, where several research groups aim to implement models that realistically reproduce the behaviour of human organs in order to build organ simulators. Such simulators, called digital twins, reproduce the organ activity of a specific patient to study disease progression or a new therapy. Patient data are the inputs of these models, which predict the patient's condition while avoiding invasive and expensive exams. The computational support that a realistic organ simulator requires is significant; for this reason, devices such as GPUs, FPGAs, multicore processors or even supercomputers are needed. As an example in this field, the second chapter of this work describes the development of a cerebellar simulator that exploits HPC, where the complexity of the realistic mathematical models justifies this technological choice to achieve reduced processing times. This work is part of the Human Brain Project, which aims to run a complete, realistic simulation of the human brain.

    Finally, these technologies play a crucial role in the development of medical support systems. During surgery it is often essential that a support system provides a real-time answer, and since that answer is the result of solving a complex mathematical problem, HPC systems are essential in this field as well. In environments such as operating rooms, it is more plausible that the computation is performed by local desktop systems able to process the data acquired directly during surgery. The third chapter of this thesis describes the development of a brain cancer detection system that exploits GPUs. This support system, developed as part of the HELICoiD project, processes hyperspectral images of the brain acquired during surgery in real time to produce a classification map that highlights the tumor, assisting the neurosurgeon during tissue resection; here the GPU has been crucial to achieving real-time processing. In conclusion, HPC will play a crucial role in most fields of personalized medicine, since they all involve processing large amounts of data in reduced times with the aim of providing patient-specific diagnoses and therapies.
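    As a rough illustration of the kind of per-pixel processing such a support system performs, the sketch below labels every pixel of a hyperspectral cube against a set of reference spectra and spreads the work over the available CPU cores. It is only a minimal toy example under assumed cube shapes and a nearest-centroid classifier; it is not the HELICoiD pipeline, which relies on GPUs and far more elaborate classifiers.

# Toy sketch: per-pixel classification of a hyperspectral cube,
# parallelized over image rows with Python's multiprocessing.
# NOT the HELICoiD pipeline; shapes and the nearest-centroid rule
# are assumptions made purely for illustration.
import numpy as np
from multiprocessing import Pool

N_BANDS = 128          # assumed number of spectral bands
H, W = 256, 256        # assumed spatial size of the image

# Assumed reference spectra (centroids) for 4 tissue classes,
# e.g. normal tissue, tumor, blood vessel, background.
rng = np.random.default_rng(0)
centroids = rng.random((4, N_BANDS))

def classify_rows(rows):
    """Label each pixel in a block of rows by its nearest centroid."""
    # rows: (n_rows, W, N_BANDS) -> distances: (n_rows, W, 4)
    d = np.linalg.norm(rows[:, :, None, :] - centroids[None, None, :, :], axis=-1)
    return d.argmin(axis=-1).astype(np.uint8)

def classification_map(cube, n_workers=4):
    """Split the cube into row blocks and classify them in parallel."""
    blocks = np.array_split(cube, n_workers, axis=0)
    with Pool(n_workers) as pool:
        labelled = pool.map(classify_rows, blocks)
    return np.concatenate(labelled, axis=0)   # (H, W) label map

if __name__ == "__main__":
    cube = rng.random((H, W, N_BANDS))        # stand-in for an acquired image
    labels = classification_map(cube)
    print(labels.shape, np.bincount(labels.ravel()))

    The same per-pixel independence is what makes this workload a natural fit for a GPU, where each pixel (or block of pixels) can be mapped to its own thread.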

    EDEN: A high-performance, general-purpose, NeuroML-based neural simulator

    Modern neuroscience employs in silico experimentation on ever-increasing and more detailed neural networks. The high modelling detail goes hand in hand with the need for high model reproducibility, reusability and transparency. Besides, the size of the models and the long timescales under study mandate the use of a simulation system with high computational performance, so as to provide an acceptable time to result. In this work, we present EDEN (Extensible Dynamics Engine for Networks), a new general-purpose, NeuroML-based neural simulator that achieves both high model flexibility and high computational performance, through an innovative model-analysis and code-generation technique. The simulator runs NeuroML v2 models directly, eliminating the need for users to learn yet another simulator-specific, model-specification language. EDEN's functional correctness and computational performance were assessed through NeuroML models available on the NeuroML-DB and Open Source Brain model repositories. In qualitative experiments, the results produced by EDEN were verified against the established NEURON simulator, for a wide range of models. At the same time, computational-performance benchmarks reveal that EDEN runs up to 2 orders-of-magnitude faster than NEURON on a typical desktop computer, and does so without additional effort from the user. Finally, and without added user effort, EDEN has been built from scratch to scale seamlessly over multiple CPUs and across computer clusters, when available.
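    As a toy illustration of the general "analyse a declarative model, then generate solver code" idea described above, the sketch below turns a small dictionary describing a leaky integrate-and-fire cell into Python source for its update step and then runs it. The model fields, the generated kernel and the forward-Euler scheme are assumptions made here for illustration; they are not EDEN internals and not the NeuroML v2 schema.

# Toy sketch of code generation for neural dynamics: read a declarative
# model description, emit solver source, compile and run it.
# Illustrative only; not EDEN's internals and not NeuroML v2.

# A tiny declarative description of a leaky integrate-and-fire cell.
model = {
    "state":  {"v": -65.0},                        # membrane potential (mV)
    "params": {"tau": 20.0, "v_rest": -65.0,
               "v_thresh": -50.0, "v_reset": -70.0, "i_ext": 1.2},
    "dynamics": "dv = (v_rest - v) / tau + i_ext",  # dv/dt expression
}

def generate_step(model):
    """Emit Python source for one forward-Euler step and compile it."""
    names = ", ".join(list(model["state"]) + list(model["params"]))
    src = (
        f"def step(dt, {names}):\n"
        f"    {model['dynamics']}\n"
        f"    v = v + dt * dv\n"
        f"    if v >= v_thresh:\n"
        f"        v = v_reset\n"
        f"        return v, True\n"
        f"    return v, False\n"
    )
    namespace = {}
    exec(compile(src, "<generated>", "exec"), namespace)
    return namespace["step"]

step = generate_step(model)
v, spikes, dt = model["state"]["v"], [], 0.1
for i in range(5000):                              # 500 ms of simulated time
    v, spiked = step(dt, v, **model["params"])
    if spiked:
        spikes.append(i * dt)
print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms" if spikes else "no spikes")

    Generating the stepping code from the model description, rather than interpreting the description at every time step, is the general mechanism that lets a simulator combine a declarative input format with native-code performance.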

    A virtual machine for event sequence identification using fuzzy tolerance


    MineSweeper: A “Clean Sweep” for Drop-In Use-After-Free Prevention


    An Analytical Model of Hardware Transactional Memory

    This paper investigates the problem of deriving a white-box performance model of Hardware Transactional Memory (HTM) systems. The proposed model targets TSX, a popular implementation of HTM integrated in Intel processors starting with the Haswell family in 2013. An inherent difficulty with building white-box models of commercially available HTM systems is that their internals are either vaguely documented or undisclosed by their manufacturers. We tackle this challenge by designing a set of experiments that allow us to shed light on the internal mechanisms used in TSX to manage conflicts among transactions and to track their readsets and writesets. We exploit the information inferred from this experimental study to build an analytical model of TSX focused on capturing the impact on performance of two key mechanisms: the concurrency control scheme and the management of transactional metadata in the processor's caches. We validate the proposed model by means of an extensive experimental study encompassing a broad range of workloads executed on a real system.
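    As a back-of-the-envelope companion to the abstract above, the sketch below computes an abort probability and throughput estimate for a set of identical transactions, assuming uniform data access, independent conflicts and a fixed write-set capacity limit. These simplifying assumptions, and every parameter value, are illustrative only; this is not the model proposed in the paper.

# Toy analytical sketch of abort probability and throughput in an HTM
# system.  Uniform access, independent conflicts and a fixed write-set
# capacity are simplifying assumptions made here for illustration.
from math import comb

def conflict_prob(n_threads, ws, data_lines):
    """P(a transaction conflicts with at least one of the n-1 others),
    when each transaction touches `ws` distinct lines chosen uniformly
    from `data_lines` shared cache lines."""
    if ws > data_lines:
        return 1.0
    # Probability that two given transactions touch disjoint sets of lines.
    p_disjoint = comb(data_lines - ws, ws) / comb(data_lines, ws)
    return 1.0 - p_disjoint ** (n_threads - 1)

def throughput(n_threads, ws, data_lines, tx_cycles,
               capacity_lines=512, fallback_cycles=5000):
    """Committed transactions per cycle under a simple retry model."""
    if ws > capacity_lines:
        # Write-set no longer fits in the transactional buffer:
        # every attempt takes the (serial) fallback path.
        return 1.0 / fallback_cycles
    p_abort = conflict_prob(n_threads, ws, data_lines)
    attempts_per_commit = 1.0 / (1.0 - p_abort) if p_abort < 1.0 else float("inf")
    return n_threads / (tx_cycles * attempts_per_commit)

for n in (1, 2, 4, 8):
    print(n, f"{throughput(n, ws=16, data_lines=4096, tx_cycles=2000):.6f}")

    Sweeping n_threads or ws in this toy model shows the qualitative effect of contention and capacity limits on the benefit of added concurrency.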