
    Focus on: New trends, challenges and perspectives on healthcare cognitive computing: from information extraction to healthcare analytics

    The focus of this special issue is cognitive computing in healthcare, due to the ever-increasing interest it is gaining for both research purposes and clinical applications. Indeed, cognitive computing is a challenging technology in many fields of application (Banavar, 2016) such as, e.g., medicine, education or economics (Coccoli et al., 2016), especially for the management of huge quantities of information, where cognitive computing techniques push applications based on the use of big data (Coccoli et al., 2017). An unprecedented amount of data is made available from a heterogeneous variety of sources, and this is true also in the case of health data, which can be exploited in many ways by means of sophisticated cognitive computing solutions and related technologies such as, e.g., information extraction, natural language processing, and analytics. Also, from the point of view of programming, they set challenging issues (see, e.g., Coccoli et al., 2015). In fact, the amount of healthcare data now available, and potentially useful to care teams, has reached 150 exabytes worldwide, and about 80% of this huge volume of data is in an unstructured form, being thus somehow invisible to systems. Hence, it is clear that cognitive computing and data analytics are the two key factors we have to make use, at least partially, of such a big volume of data. This can lead to personalized health solutions and healthcare systems that are more reliable, effective and efficient, while also reducing their expenditures. Cognitive computing in healthcare will have a big impact on industry and research. However, this field, which seems to open a new era for our society, requires many scientific endeavours. Just to name a few, one needs to create a hybrid and secure cloud to guarantee the security and confidentiality of health data, especially when smartphones or similar devices are used with specific apps (see, e.g., Mazurczyk & Caviglione, 2015). Besides the cloud, one also needs to consider novel architectures and data platforms that shall be different from the existing ones, because 90% of health and biomedical data are images and also because 80% of health data in the world is not available on the Web. This special issue aims to review the state of the art of issues and solutions in cognitive computing, focusing also on the current challenges and perspectives, and includes a heterogeneous collection of papers covering the following topics: information extraction in healthcare applications, semantic analysis in medicine, data analytics in healthcare, machine learning and cognitive computing, data architecture for healthcare, data platforms for healthcare, and hybrid cloud for healthcare.
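
    As a concrete illustration of the kind of information extraction the issue surveys, the sketch below pulls a few structured fields out of an unstructured clinical note using simple pattern matching. This is a minimal sketch only: the note text, field names, and patterns are hypothetical, and real clinical pipelines rely on trained NLP models rather than hand-written rules.

```python
# Minimal, illustrative information-extraction sketch for unstructured
# clinical text. The note, field names, and regular expressions below are
# hypothetical examples, not taken from any paper in this issue.
import re

NOTE = ("Patient reports chest pain for 3 days. BP 145/90 mmHg. "
        "Prescribed aspirin 81 mg daily.")

PATTERNS = {
    "blood_pressure": r"BP\s+(\d{2,3}/\d{2,3})\s*mmHg",   # e.g. 145/90
    "medication_dose": r"([A-Za-z]+)\s+(\d+\s*mg)",        # drug name + dose
    "symptom_duration": r"(\w+ pain) for (\d+ days?)",     # symptom + duration
}

def extract(note: str) -> dict:
    """Turn free-text fragments into structured fields via pattern matching."""
    results = {}
    for field, pattern in PATTERNS.items():
        if (match := re.search(pattern, note)):
            results[field] = match.groups()
    return results

print(extract(NOTE))
# {'blood_pressure': ('145/90',), 'medication_dose': ('aspirin', '81 mg'),
#  'symptom_duration': ('chest pain', '3 days')}
```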

    Research and Education in Computational Science and Engineering

    Over the past two decades the field of computational science and engineering (CSE) has penetrated both basic and applied research in academia, industry, and laboratories to advance discovery, optimize systems, support decision-makers, and educate the scientific and engineering workforce. Informed by centuries of theory and experiment, CSE performs computational experiments to answer questions that neither theory nor experiment alone is equipped to answer. CSE provides scientists and engineers of all persuasions with algorithmic inventions and software systems that transcend disciplines and scales. Carried on a wave of digital technology, CSE brings the power of parallelism to bear on troves of data. Mathematics-based advanced computing has become a prevalent means of discovery and innovation in essentially all areas of science, engineering, technology, and society; and the CSE community is at the core of this transformation. However, a combination of disruptive developments---including the architectural complexity of extreme-scale computing, the data revolution that engulfs the planet, and the specialization required to follow the applications to new frontiers---is redefining the scope and reach of the CSE endeavor. This report describes the rapid expansion of CSE and the challenges to sustaining its bold advances. The report also presents strategies and directions for CSE research and education for the next decade. Comment: Major revision, to appear in SIAM Review.

    A Tale of Two Data-Intensive Paradigms: Applications, Abstractions, and Architectures

    Scientific problems that depend on processing large amounts of data require overcoming challenges in multiple areas: managing large-scale data distribution, co-placement and scheduling of data with compute resources, and storing and transferring large volumes of data. We analyze the ecosystems of the two prominent paradigms for data-intensive applications, hereafter referred to as the high-performance computing and the Apache-Hadoop paradigm. We propose a basis, common terminology and functional factors upon which to analyze the two approaches of both paradigms. We discuss the concept of "Big Data Ogres" and their facets as a means of understanding and characterizing the most common application workloads found across the two paradigms. We then discuss the salient features of the two paradigms, and compare and contrast the two approaches. Specifically, we examine common implementations/approaches of these paradigms, shed light upon the reasons for their current "architecture", and discuss some typical workloads that utilize them. In spite of the significant software distinctions, we believe there is architectural similarity. We discuss the potential integration of different implementations, across the different levels and components. Our comparison progresses from a fully qualitative examination of the two paradigms to a semi-quantitative methodology. We use a simple and broadly used Ogre (K-means clustering) and characterize its performance on a range of representative platforms, covering several implementations from both paradigms. Our experiments provide insight into the relative strengths of the two paradigms. We propose that the set of Ogres will serve as a benchmark to evaluate the two paradigms along different dimensions. Comment: 8 pages, 2 figures.
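
    To make the benchmark idea concrete, a minimal sketch of the K-means kernel is given below in plain NumPy. This is an illustration only: the paper's experiments use full HPC and Apache-Hadoop implementations of K-means, and the data shape, number of clusters, and iteration count here are assumptions chosen for the example.

```python
# Minimal K-means sketch (plain NumPy) illustrating the computational kernel
# behind the "K-means Ogre" benchmark. Data size, k, and iteration count are
# illustrative assumptions, not the paper's experimental configuration.
import numpy as np

def kmeans(points: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    rng = np.random.default_rng(seed)
    # Initialize centroids by sampling k distinct input points.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each centroid as the mean of its members.
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

if __name__ == "__main__":
    data = np.random.default_rng(1).normal(size=(10_000, 3))
    centers, assignments = kmeans(data, k=4)
    print(centers)
```

    The two paradigms map this same kernel very differently (for instance, message-passing reductions on HPC platforms versus map-reduce stages in the Hadoop ecosystem), which is the kind of contrast the Ogre benchmarking is intended to expose.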

    How can SMEs benefit from big data? Challenges and a path forward

    Big data is big news, and large companies in all sectors are making significant advances in their customer relations, product selection and development, and consequent profitability through using this valuable commodity. Small and medium enterprises (SMEs) have proved themselves to be slow adopters of the new technology of big data analytics and are in danger of being left behind. In Europe, SMEs are a vital part of the economy, and the challenges they encounter need to be addressed as a matter of urgency. This paper identifies barriers to SME uptake of big data analytics and recognises the complex challenge they pose to all stakeholders, including national and international policy makers, IT, business management and data science communities. The paper proposes a big data maturity model for SMEs as a first step towards an SME roadmap to data analytics. It considers the ‘state of the art’ of IT with respect to usability and usefulness for SMEs and discusses how SMEs can overcome the barriers preventing them from adopting existing solutions. The paper then considers management perspectives and the role of maturity models in enhancing and structuring the adoption of data analytics in an organisation. The history of total quality management is reviewed to inform the core aspects of implanting a new paradigm. The paper concludes with recommendations to help SMEs develop their big data capability and enable them to continue as the engines of European industrial and business success. Copyright © 2016 John Wiley & Sons, Ltd.