    The organizational implications of medical imaging in the context of Malaysian hospitals

    This research investigated the implementation and use of medical imaging in the context of Malaysian hospitals. In this report, medical imaging refers to PACS, RIS/HIS and imaging modalities linked through a computer network. The study examined how the internal context of a hospital and its external context together influenced the implementation of medical imaging, and how this in turn shaped organizational roles and relationships within the hospital itself. It further investigated how the implementation of the technology in one hospital affected its implementation in another hospital. The research used systems theory as its theoretical framework. Methodologically, the study used a case-based approach and multiple methods to obtain data. The case studies included two hospital-based radiology departments in Malaysia. The outcomes of the research suggest that the implementation of medical imaging in community hospitals is shaped by the external context, particularly the role played by the Ministry of Health. Furthermore, influences from both the internal and external contexts have a substantial impact on the process of implementing medical imaging and the extent of the benefits that the organization can gain. In the context of roles and social relationships, the findings revealed that the routine use of medical imaging has substantially affected radiographers' roles and the social relationships between non-clinical personnel and clinicians. This study found no change in the relationship between radiographers and radiologists. Finally, the approaches to implementation taken in the hospitals studied were found to influence those taken by other hospitals. Overall, this study makes three important contributions.
Firstly, it extends Barley's (1986, 1990) research by explicitly demonstrating that the organization's internal and external contexts together shape the implementation and use of technology, that the processes of implementing and using technology impact upon roles, relationships and networks, and that a role-based approach alone is inadequate to examine the outcomes of deploying an advanced technology. Secondly, this study contends that scalability of technology in the context of developing countries is not necessarily linear. Finally, this study offers practical contributions that can benefit healthcare organizations in Malaysia.

    How can SMEs benefit from big data? Challenges and a path forward

    Big data is big news, and large companies in all sectors are making significant advances in their customer relations, product selection and development, and consequent profitability through using this valuable commodity. Small and medium enterprises (SMEs) have proved themselves to be slow adopters of the new technology of big data analytics and are in danger of being left behind. In Europe, SMEs are a vital part of the economy, and the challenges they encounter need to be addressed as a matter of urgency. This paper identifies barriers to SME uptake of big data analytics and recognises their complex challenge to all stakeholders, including national and international policy makers, IT, business management and data science communities. The paper proposes a big data maturity model for SMEs as a first step towards an SME roadmap to data analytics. It considers the 'state-of-the-art' of IT with respect to usability and usefulness for SMEs and discusses how SMEs can overcome the barriers preventing them from adopting existing solutions. The paper then considers management perspectives and the role of maturity models in enhancing and structuring the adoption of data analytics in an organisation. The history of total quality management is reviewed to inform the core aspects of implanting a new paradigm. The paper concludes with recommendations to help SMEs develop their big data capability and enable them to continue as the engines of European industrial and business success. Copyright © 2016 John Wiley & Sons, Ltd.

    CU2CL: A CUDA-to-OpenCL Translator for Multi- and Many-core Architectures

    The use of graphics processing units (GPUs) in high-performance parallel computing continues to become more prevalent, often as part of a heterogeneous system. For years, CUDA has been the de facto programming environment for nearly all general-purpose GPU (GPGPU) applications. In spite of this, the framework is available only on NVIDIA GPUs, traditionally requiring reimplementation in other frameworks in order to utilize additional multi- or many-core devices. On the other hand, OpenCL provides an open and vendor-neutral programming environment and runtime system. With implementations available for CPUs, GPUs, and other types of accelerators, OpenCL therefore holds the promise of a "write once, run anywhere" ecosystem for heterogeneous computing. Given the many similarities between CUDA and OpenCL, manually porting a CUDA application to OpenCL is typically straightforward, albeit tedious and error-prone. In response to this issue, we created CU2CL, an automated CUDA-to-OpenCL source-to-source translator that possesses a novel design and makes clever reuse of the Clang compiler framework. Currently, the CU2CL translator covers the primary constructs found in the CUDA runtime API, and we have successfully translated many applications from the CUDA SDK and the Rodinia benchmark suite. The performance of the applications automatically translated by CU2CL is on par with that of their manually ported counterparts.
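    To illustrate the kind of rewriting a CUDA-to-OpenCL source-to-source translator performs, the sketch below maps a handful of CUDA keywords and runtime API names to their OpenCL counterparts via whole-word textual substitution. This is a deliberately naive illustration, not CU2CL's actual mechanism: per the abstract, CU2CL is built on the Clang compiler framework and operates on the parsed source rather than on raw text, and the mapping table here covers only a few representative identifiers.

    ```python
    import re

    # A few representative CUDA-to-OpenCL correspondences (illustrative only;
    # real translation must also restructure arguments, kernel launches, etc.).
    CUDA_TO_OPENCL = {
        "__global__": "__kernel",        # kernel qualifier
        "__shared__": "__local",         # on-chip shared/local memory
        "cudaMalloc": "clCreateBuffer",  # device allocation
        "cudaFree": "clReleaseMemObject",
    }

    # Match any mapped identifier as a whole word.
    _PATTERN = re.compile(
        r"\b(" + "|".join(map(re.escape, CUDA_TO_OPENCL)) + r")\b"
    )

    def translate(source: str) -> str:
        """Replace whole-word CUDA identifiers with OpenCL equivalents."""
        return _PATTERN.sub(lambda m: CUDA_TO_OPENCL[m.group(1)], source)
    ```

    For example, `translate("__global__ void add(int *a) { }")` yields `"__kernel void add(int *a) { }"`. A pure-text approach like this breaks down quickly (e.g. `cudaMemcpy` calls change argument structure entirely), which is precisely why an AST-based translator such as CU2CL is needed for real applications.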

    Building an Emulation Environment for Cyber Security Analyses of Complex Networked Systems

    Computer networks are undergoing phenomenal growth, driven by the rapidly increasing number of nodes constituting them. At the same time, the number of security threats on Internet and intranet networks is constantly growing, and the testing and experimentation of cyber defense solutions require the availability of separate test environments that best emulate the complexity of a real system. Such environments support the deployment and monitoring of complex mission-driven network scenarios, thus enabling the study of cyber defense strategies under realistic and controllable traffic and attack scenarios. In this paper, we propose a methodology that combines techniques of network and security assessment with the use of cloud technologies to build an emulation environment with an adjustable degree of affinity with respect to actual reference networks or planned systems. As a byproduct, starting from a specific case study, we collected a dataset consisting of complete network traces comprising benign and malicious traffic, which is feature-rich and publicly available.