729 research outputs found

    High performance cloud computing on multicore computers

    The cloud has become a major computing platform, with virtualization being key to allowing applications to run and share resources in the cloud. A wide spectrum of applications need to process large amounts of data at high speed in the cloud, e.g., analyzing customer data to discover purchase behavior, processing location data to determine geographical trends, or mining social media data to assess brand sentiment. To achieve high performance, these applications create and use multiple threads running on multicore processors. However, existing virtualization technology cannot support the efficient execution of such applications on virtual machines, leaving them with poor and unstable performance in the cloud. Targeting multi-threaded applications, the dissertation analyzes and diagnoses their performance issues on virtual machines, and designs practical solutions to improve their performance. The dissertation makes the following contributions. First, it conducts extensive experiments with standard multicore applications to evaluate the performance overhead of virtualization systems and diagnose its causes. Second, focusing on one main source of this overhead, excessive spinning, the dissertation designs and evaluates a holistic solution that makes effective use of the hardware virtualization support in processors to reduce excessive spinning at low cost. Third, focusing on application scalability, the most important performance feature of multi-threaded applications, the dissertation models application scalability in virtual machines and analyzes how it changes with virtualization and resource sharing. Based on this modeling and analysis, the dissertation identifies key application features and system factors that affect scalability, and reveals possible approaches for improving it.
Fourth, the dissertation explores one approach to improving application scalability: fully utilizing the virtual resources of each virtual machine. The general idea is to match the workload distribution among the virtual CPUs in a virtual machine to the virtual CPU resources allocated by the virtual machine manager.
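The abstract does not give the dissertation's actual scalability equations. As a hedged illustration of the effect it describes, the sketch below uses a simple Amdahl-style model in which resource sharing gives each virtual CPU only a fraction of a physical core; all names and numbers are assumptions, not the dissertation's model.

```python
# Illustrative sketch: Amdahl-style speedup estimate for a multi-threaded
# application on virtual CPUs that share physical cores with other VMs.

def estimated_speedup(serial_fraction, vcpus, cpu_share=1.0):
    """Estimated speedup on `vcpus` virtual CPUs.

    serial_fraction: portion of work that cannot be parallelized (0..1).
    cpu_share: fraction of a physical core each vCPU actually receives
               (1.0 = dedicated core, < 1.0 = overcommitted host).
    """
    parallel_fraction = 1.0 - serial_fraction
    effective_cores = vcpus * cpu_share
    return 1.0 / (serial_fraction + parallel_fraction / effective_cores)

# On a dedicated host, 8 vCPUs scale well; with a 50% CPU share the
# same application sees a markedly smaller speedup.
dedicated = estimated_speedup(0.05, 8, cpu_share=1.0)
shared = estimated_speedup(0.05, 8, cpu_share=0.5)
```

The toy model only captures compute-share dilution; it ignores spinning and synchronization effects, which the dissertation treats separately.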

    Manipulation of Online Reviews: Analysis of Negative Reviews for Healthcare Providers

    There is a growing reliance on online reviews in today's digital world. As the influence of online reviews has grown in the competitive marketplace, so have the manipulation of reviews and the evolution of fake reviews on these platforms. Like other consumer-oriented businesses, the healthcare industry has also succumbed to this phenomenon. However, health issues are far more personal, sensitive, and complicated in nature, requiring knowledge of medical terminology and often coupled with a myriad of interdependencies. In this study, we collate the literature on the manipulation of online reviews, identify the gaps, and propose an approach, including validation of negative reviews, for 500 doctors from three different states: New York and Arizona in the USA and New South Wales in Australia, drawn from the RateMDs website. The reviews of these doctors were collected, including both numerical star ratings (1-low to 5-high) and textual feedback/comments. In contrast to existing research, this study analyses the textual feedback that corresponds to the clinical quality of doctors (helpfulness and knowledge criteria) rather than process-quality experiences. Our study explores pathways to validate negative reviews for the platform provider and to rank doctors accordingly, so as to minimise risks in healthcare.
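The study's actual validation pipeline is not described in this abstract. As a hedged sketch of its first step, the code below selects negative reviews (star rating of 2 or less) and flags those whose text touches clinical quality (helpfulness/knowledge) rather than process quality; the keyword list and data layout are illustrative assumptions.

```python
# Hypothetical first filtering step: keep negative reviews that mention
# clinical-quality concerns. Keyword stems below are assumptions, not the
# study's actual criteria.

CLINICAL_TERMS = {"misdiagnos", "knowledge", "helpful", "treatment", "competen"}

def clinical_negative_reviews(reviews):
    """reviews: list of dicts with 'rating' (1-5) and 'text' keys."""
    selected = []
    for review in reviews:
        if review["rating"] > 2:
            continue  # keep only negative reviews (1-2 stars)
        text = review["text"].lower()
        if any(term in text for term in CLINICAL_TERMS):
            selected.append(review)
    return selected

sample = [
    {"rating": 1, "text": "Misdiagnosed my condition twice."},
    {"rating": 2, "text": "Waiting room was crowded and noisy."},  # process quality
    {"rating": 5, "text": "Very helpful and knowledgeable doctor."},
]
flagged = clinical_negative_reviews(sample)
```

Only the first review survives: it is negative and clinical, while the second is negative but process-related and the third is positive.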

    Mining a Small Medical Data Set by Integrating the Decision Tree and t-test

    Although several researchers have used statistical methods to show that aspiration followed by the injection of 95% ethanol left in situ (retention) is an effective treatment for ovarian endometriomas, very few discuss the different conditions that could produce different recovery rates for patients. Therefore, this study combines statistical methods and decision tree techniques to analyze the postoperative status of ovarian endometriosis patients under different conditions. Since our collected data set is small, containing only 212 records, we use all of the data as training data. Therefore, instead of using the resulting tree to generate rules directly, we first use the value of each node as a cut point to generate all possible rules from the tree. Then, after all possible rules have been generated, we verify them with the t-test to discover useful descriptive rules. Experimental results show that our approach can find new and interesting knowledge about recurrent ovarian endometriomas under different conditions.
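The verification step described above can be sketched as follows: a tree node's value becomes a cut point that splits patients into two groups, and a two-sample t-test checks whether their outcomes differ. The feature names, cut point, and toy data below are illustrative assumptions, not the study's clinical variables.

```python
# Illustrative sketch of cut-point rule verification with a two-sample
# (Welch's) t-test, using only the standard library.
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(sample_a), variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(va / na + vb / nb)

def verify_cut_point(records, feature, cut, outcome):
    """Split records at `cut` on `feature` and compare `outcome` means."""
    left = [r[outcome] for r in records if r[feature] <= cut]
    right = [r[outcome] for r in records if r[feature] > cut]
    if len(left) < 2 or len(right) < 2:
        return None  # too few records on one side to run the test
    return welch_t(left, right)

# Made-up records: recovery scores appear to differ around age 40.
patients = [{"age": a, "recovery": s} for a, s in
            [(25, 0.9), (30, 0.8), (35, 0.85), (45, 0.5), (50, 0.4), (55, 0.45)]]
t_stat = verify_cut_point(patients, "age", 40, "recovery")
```

A large |t| suggests the candidate rule separates genuinely different recovery rates; in practice the statistic would be compared against a critical value at the chosen significance level.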

    Dynamic Binary Translation for Embedded Systems with Scratchpad Memory

    Embedded software development has recently changed with advances in computing. Rather than fully co-designing software and hardware to perform a relatively simple task, nowadays embedded and mobile devices are designed as platforms where multiple applications can be run, new applications can be added, and existing applications can be updated. In this scenario, traditional constraints in embedded systems design (i.e., performance, memory and energy consumption, and real-time guarantees) are more difficult to address. New concerns (e.g., security) have become important and increase software complexity as well. In general-purpose systems, Dynamic Binary Translation (DBT) has been used to address these issues with services such as Just-In-Time (JIT) compilation, dynamic optimization, virtualization, power management, and code security. In embedded systems, however, DBT is not usually employed due to its performance, memory, and power overhead. This dissertation presents StrataX, a low-overhead DBT framework for embedded systems. StrataX addresses the challenges faced by DBT in embedded systems using novel techniques. To reduce DBT overhead, StrataX loads code from NAND-Flash storage and translates it into a Scratchpad Memory (SPM), a software-managed on-chip SRAM with limited capacity. SPM has access latency similar to that of a hardware cache, but consumes less power and chip area. StrataX manages the SPM as a software instruction cache, and employs victim compression and pinning to reduce retranslation cost and capture frequently executed code in the SPM. To prevent performance loss due to excessive code expansion, StrataX minimizes the amount of code inserted by the DBT to maintain control of program execution. When a hardware instruction cache is available, StrataX dynamically partitions translated code between the SPM and main memory. With these techniques, StrataX has low performance overhead relative to native execution for MiBench programs. Further, it simplifies embedded software and hardware design by operating transparently to applications, without requiring any special hardware support. StrataX achieves sufficiently low overhead to make DBT feasible in embedded systems for addressing important design goals and requirements.
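StrataX's actual data structures are not given in this abstract. As a hedged sketch of the "software instruction cache with pinning" idea, the code below models an SPM of fixed capacity holding translated fragments, evicting the least recently used unpinned fragment on overflow so pinned hot code is never retranslated (victim compression is not modeled).

```python
# Simplified, assumed model of a software-managed SPM instruction cache
# with LRU eviction and pinning; not StrataX's actual implementation.
from collections import OrderedDict

class SpmInstructionCache:
    def __init__(self, capacity):
        self.capacity = capacity        # fragment slots available in the SPM
        self.fragments = OrderedDict()  # address -> translated code, LRU order
        self.pinned = set()

    def pin(self, addr):
        """Mark a hot fragment as non-evictable."""
        self.pinned.add(addr)

    def insert(self, addr, code):
        self.fragments[addr] = code
        self.fragments.move_to_end(addr)
        while len(self.fragments) > self.capacity:
            # evict the least recently used fragment that is not pinned
            victim = next(a for a in self.fragments if a not in self.pinned)
            del self.fragments[victim]

    def lookup(self, addr):
        if addr in self.fragments:
            self.fragments.move_to_end(addr)  # refresh LRU position
            return self.fragments[addr]
        return None  # miss: caller must (re)translate from NAND flash

cache = SpmInstructionCache(capacity=2)
cache.insert(0x100, "frag-A")
cache.pin(0x100)                 # hot fragment stays resident
cache.insert(0x200, "frag-B")
cache.insert(0x300, "frag-C")    # evicts 0x200, not the pinned 0x100
```

A miss on an evicted fragment models the retranslation cost that pinning is meant to avoid for frequently executed code.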

    The words of the body: psychophysiological patterns in dissociative narratives

    Trauma has severe consequences on both psychological and somatic levels, even affecting genetic expression and the cell's DNA repair ability. A key mechanism in the understanding of clinical disorders deriving from trauma is dissociation, a primitive defense against the fragmentation of the self caused by overwhelming experiences. The dysregulation of interpersonal patterns due to the traumatic experience, and its detrimental effects on the body, are supported by influential neuroscientific models such as Damasio's somatic markers and Porges' polyvagal theory. On the basis of these premises, and supported by our previous empirical observations on 40 simulated clinical sessions, we discuss the longitudinal process of a brief psychodynamic psychotherapy (16 sessions, weekly frequency) with a patient who suffered a relational trauma. The research design consists of the collection of self-report and projective tests, pre- and post-therapy and after each clinical session, in order to assess personality, empathy, clinical alliance, and clinical progress, along with verbatim analysis of the transcripts through the Psychotherapy Process Q-Set and the Collaborative Interactions Scale. Furthermore, we collected simultaneous psychophysiological measures of the therapeutic dyad: skin conductance and heart rate. Lastly, we employed a computerized analysis of non-verbal behaviors to assess synchrony in posture and gestures. These automated measures can highlight moments of affective concordance and discordance, allowing a deep understanding of the mutual regulation between patient and therapist. Preliminary results showed that psychophysiological changes in dyadic synchrony, observed in body movements, skin conductance, and heart rate, occurred within sessions during the discussion of traumatic experiences, with levels of attunement that changed in both the therapist and the patient depending on the quality of the emotional representation of the experience. These results point toward an understanding of the relational process in trauma therapy, using an integrative language in which clinical and neurophysiological knowledge can benefit from each other.
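The abstract does not state how dyadic synchrony was computed. One common approach, sketched here purely as an assumed illustration, is windowed correlation: Pearson correlation between the patient's and therapist's physiological signals over sliding windows, so moments of attunement appear as highly correlated windows.

```python
# Assumed illustration (not the study's actual method): per-window Pearson
# correlation between two physiological time series.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def windowed_synchrony(patient, therapist, window=4, step=2):
    """Correlation of the two signals over overlapping sliding windows."""
    scores = []
    for start in range(0, len(patient) - window + 1, step):
        scores.append(pearson(patient[start:start + window],
                              therapist[start:start + window]))
    return scores

# Toy skin-conductance traces: attuned at first, diverging at the end.
patient_scl   = [1.0, 1.2, 1.4, 1.6, 1.8, 1.6, 1.4, 1.2]
therapist_scl = [0.9, 1.1, 1.3, 1.5, 1.4, 1.6, 1.8, 2.0]
scores = windowed_synchrony(patient_scl, therapist_scl)
```

The first window yields a correlation near +1 (concordance) and the last near -1 (discordance), mirroring the within-session shifts in attunement the study reports.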

    The holistic perspective of the INCISIVE Project: artificial intelligence in screening mammography

    Finding new ways to cost-effectively facilitate population screening and improve early-stage cancer diagnosis, supported by data-driven AI models, provides unprecedented opportunities to reduce cancer-related mortality. This work presents the INCISIVE project initiative towards enhancing AI solutions for health imaging by unifying, harmonizing, and securely sharing scattered cancer-related data to ensure the large datasets that are critically needed to develop and evaluate trustworthy AI models. The adopted solutions of the INCISIVE project are outlined in terms of data collection, harmonization, data sharing, and federated data storage, in compliance with legal, ethical, and FAIR principles. Experiences and examples feature breast cancer data integration and mammography collection, indicating the current progress, challenges, and future directions.
    This research received funding mainly from the European Union's Horizon 2020 research and innovation program under grant agreement no. 952179. It was also partially funded by the Ministry of Economy, Industry, and Competitiveness of Spain under contracts PID2019-107255GB and 2017-SGR-1414. Peer reviewed. Postprint (published version).
    Article signed by 30 authors: Ivan Lazic (1), Ferran Agullo (2), Susanna Ausso (3), Bruno Alves (4), Caroline Barelle (4), Josep Ll. Berral (2), Paschalis Bizopoulos (5), Oana Bunduc (6), Ioanna Chouvarda (7), Didier Dominguez (3), Dimitrios Filos (7), Alberto Gutierrez-Torre (2), Iman Hesso (8), Nikša Jakovljević (1), Reem Kayyali (8), Magdalena Kogut-Czarkowska (9), Alexandra Kosvyra (7), Antonios Lalas (5), Maria Lavdaniti (10,11), Tatjana Loncar-Turukalo (1), Sara Martinez-Alabart (3), Nassos Michas (4,12), Shereen Nabhani-Gebara (8), Andreas Raptopoulos (6), Yiannis Roussakis (13), Evangelia Stalika (7,11), Chrysostomos Symvoulidis (6,14), Olga Tsave (7), Konstantinos Votis (5), Andreas Charalambous (15) / (1) Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia; (2) Barcelona Supercomputing Center, 08034 Barcelona, Spain; (3) Fundació TIC Salut Social, Ministry of Health of Catalonia, 08005 Barcelona, Spain; (4) European Dynamics, 1466 Luxembourg, Luxembourg; (5) Centre for Research and Technology Hellas, 57001 Thessaloniki, Greece; (6) Telesto IoT Solutions, London N7 7PX, UK; (7) School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece; (8) Department of Pharmacy, Kingston University London, London KT1 2EE, UK; (9) Timelex BV/SRL, 1000 Brussels, Belgium; (10) Nursing Department, International Hellenic University, 57400 Thessaloniki, Greece; (11) Hellenic Cancer Society, 11521 Athens, Greece; (12) European Dynamics, 15124 Athens, Greece; (13) German Oncology Center, Department of Medical Physics, Limassol 4108, Cyprus; (14) Department of Digital Systems, University of Piraeus, 18534 Piraeus, Greece; (15) Department of Nursing, Cyprus University of Technology, Limassol 3036, Cyprus.

    Polymorphic computing abstraction for heterogeneous architectures

    Integration of multiple computing paradigms onto a system on chip (SoC) has pushed the boundaries of design space exploration for hardware architectures and the computing system software stack. The heterogeneity of computing styles in SoCs has created a new class of architectures referred to as heterogeneous architectures. Novel applications developed to exploit the different computing styles are user-centric for embedded SoCs. Software and hardware designers face several challenges in harnessing the full potential of heterogeneous architectures. Applications have to execute on more than one compute style to increase overall SoC resource utilization. The implication of such an abstraction is that application threads need to be polymorphic. The operating system layer is thus faced with the problem of scheduling polymorphic threads. Resource allocation is also an important problem to be dealt with by the OS. Morphism evolution of application threads is constrained by the availability of heterogeneous computing resources. Traditional design optimization goals, such as computational power and lower energy per computation, are inadequate to satisfy user-centric application resource needs. Resource allocation decisions at the application layer need to permeate to the architectural layer to avoid conflicting demands that may affect the energy-delay characteristics of application threads. We propose the polymorphic computing abstraction as a unified computing model for heterogeneous architectures to address these issues. A simulation environment for polymorphic applications is developed and evaluated under various scheduling strategies to determine the effectiveness of the polymorphism abstraction for resource allocation. A user satisfaction model is also developed to complement polymorphism and is used to optimize resource utilization at the application and network layers of embedded systems.
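The dissertation's scheduling strategies are not spelled out in this abstract. As a hedged sketch of the core allocation problem, the code below gives each polymorphic thread a per-style energy-delay cost and greedily maps threads to the cheapest compute style with a free slot; the style names, costs, and greedy policy are assumptions for illustration only.

```python
# Illustrative sketch of scheduling polymorphic threads across compute
# styles on a heterogeneous SoC (names and numbers are made up).

def schedule(threads, slots):
    """threads: {thread name: {style: energy-delay cost}}.
    slots: {style: number of free execution slots}.
    Returns {thread name: chosen style}, picking for each thread the
    cheapest style that still has a free slot (greedy, not optimal)."""
    free = dict(slots)
    assignment = {}
    for name, costs in threads.items():
        # try styles in order of increasing energy-delay cost
        for style in sorted(costs, key=costs.get):
            if free.get(style, 0) > 0:
                free[style] -= 1
                assignment[name] = style
                break
    return assignment

threads = {
    "fft":    {"gpu": 1.0, "cpu": 4.0},  # strongly prefers the GPU style
    "parser": {"cpu": 1.5, "gpu": 6.0},  # control-heavy, prefers the CPU
    "filter": {"gpu": 1.2, "cpu": 2.0},  # would like the GPU too
}
plan = schedule(threads, slots={"gpu": 1, "cpu": 2})
```

With a single GPU slot, "fft" claims it and "filter" morphs to its second-best style, illustrating how morphism is constrained by the availability of heterogeneous resources.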