    How can SMEs benefit from big data? Challenges and a path forward

    Big data is big news, and large companies in all sectors are making significant advances in their customer relations, product selection and development, and consequent profitability through using this valuable commodity. Small and medium enterprises (SMEs) have proved themselves to be slow adopters of the new technology of big data analytics and are in danger of being left behind. In Europe, SMEs are a vital part of the economy, and the challenges they encounter need to be addressed as a matter of urgency. This paper identifies barriers to SME uptake of big data analytics and recognises the complex challenge they pose to all stakeholders, including national and international policy makers and the IT, business management and data science communities. The paper proposes a big data maturity model for SMEs as a first step towards an SME roadmap to data analytics. It considers the ‘state-of-the-art’ of IT with respect to usability and usefulness for SMEs and discusses how SMEs can overcome the barriers preventing them from adopting existing solutions. The paper then considers management perspectives and the role of maturity models in enhancing and structuring the adoption of data analytics in an organisation. The history of total quality management is reviewed to inform the core aspects of implanting a new paradigm. The paper concludes with recommendations to help SMEs develop their big data capability and enable them to continue as the engines of European industrial and business success. Copyright © 2016 John Wiley & Sons, Ltd.

    ISBIS 2016: Meeting on Statistics in Business and Industry

    This book includes the abstracts of the talks presented at the 2016 International Symposium on Business and Industrial Statistics, held in Barcelona, June 8-10, 2016, and hosted by the Department of Statistics and Operations Research of the Universitat Politècnica de Catalunya - Barcelona TECH. The meeting took place in the ETSEIB building (Escola Tècnica Superior d'Enginyeria Industrial), Avda. Diagonal 647. The meeting organizers celebrated the continued success of the ISBIS and ENBIS societies, and the meeting drew together the international community of statisticians, both academics and industry professionals, who share the goal of making statistics the foundation for decision making in business and related applications. The Scientific Program Committee consisted of: David Banks, Duke University; Amílcar Oliveira, DCeT - Universidade Aberta and CEAUL; Teresa A. Oliveira, DCeT - Universidade Aberta and CEAUL; Nalini Ravishankar, University of Connecticut; Xavier Tort Martorell, Universitat Politècnica de Catalunya, Barcelona TECH; Martina Vandebroek, KU Leuven; Vincenzo Esposito Vinzi, ESSEC Business School.

    Simulated Clinical Trials: some design issues

    Simulation is widely used to investigate real-world systems in a large number of fields, including clinical trials for drug development, since real trials are costly, frequently fail and may lead to serious side effects. This paper is a survey of the statistical issues arising in these simulated trials. We illustrate the broad applicability of this investigation tool by means of examples selected from the literature. We discuss the aims and the peculiarities of the simulation models used in this context, including a brief mention of the use of metamodels. Of special interest is the topic of the design of the virtual experiments, stressing similarities and differences with the design of real-life trials. Since it is important for a computerized model to possess a satisfactory range of accuracy consistent with its intended application, real data provided by physical experiments are used to confirm the simulator: we illustrate validation techniques through a number of examples. We end the paper with some challenging questions on the scientific validity, ethics and effectiveness of simulation in clinical research, and the interesting research problem of how to integrate simulated and physical experiments in a clinical context. Keywords: simulation models; pharmacokinetics; pharmacodynamics; model validation; experimental design; ethics.
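
The core of such a virtual experiment can be sketched in a few lines. The sketch below simulates a simple two-arm trial and estimates, by repeated virtual runs, how often the observed effect clears a decision threshold; the effect size, variability, sample size and threshold are all hypothetical illustration values, not taken from the paper.

```python
# Minimal sketch of a simulated two-arm clinical trial (illustrative only;
# all numeric parameters below are hypothetical assumptions).
import random
import statistics

def simulate_trial(n_per_arm=50, control_mean=0.0, treatment_mean=0.5,
                   sd=1.0, rng=None):
    """Run one virtual trial and return the observed treatment effect
    (difference of arm means)."""
    rng = rng or random.Random()
    control = [rng.gauss(control_mean, sd) for _ in range(n_per_arm)]
    treated = [rng.gauss(treatment_mean, sd) for _ in range(n_per_arm)]
    return statistics.mean(treated) - statistics.mean(control)

def estimate_power(n_trials=2000, threshold=0.28, seed=42, **trial_kwargs):
    """Fraction of virtual trials whose observed effect exceeds the
    decision threshold -- a Monte Carlo power estimate."""
    rng = random.Random(seed)
    hits = sum(simulate_trial(rng=rng, **trial_kwargs) > threshold
               for _ in range(n_trials))
    return hits / n_trials
```

Sweeping `n_per_arm` or `treatment_mean` over a grid of values turns this into a crude virtual design study: the simulated power surface indicates which real-trial sample sizes would be worth funding.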

    Assessing and inferring intra and inter-rater agreement

    This research work aims to provide a scientific contribution to the field of subjective decision making, since assessing the consensus (equivalently, the degree of agreement) among a group of raters, or between multiple series of evaluations provided by the same rater, on categorical scales is a subject of both scientific and practical interest. Specifically, the research focuses on the analysis of measures of agreement commonly adopted for assessing the performance (evaluative abilities) of one or more human raters (i.e. a group of raters) providing subjective evaluations about a given set of items/subjects. This topic is common to many contexts, ranging from medical (diagnosis) to engineering (usability testing), industrial (visual inspection) or agribusiness (sensory analysis) settings. In the thesis, the performance of the agreement indexes under study, belonging to the family of kappa-type agreement coefficients, has been assessed mainly with regard to their inferential aspects, focusing attention on scenarios with small sample sizes that do not satisfy the asymptotic conditions required for the applicability of standard inferential methods. Such scenarios have been poorly investigated in the specialized literature, although they are of evident interest in many experimental contexts. The critical analysis of the specialized literature highlighted two criticisms regarding the adoption of agreement coefficients: 1) the degree of agreement is generally characterized by a straightforward benchmarking procedure that does not take into account the sampling uncertainty; 2) there is no evidence in the literature of a synthetic index able to assess the performance of a rater and/or of a group of raters in terms of more than one evaluative ability (for example, repeatability and reproducibility).
Regarding the former criticism, an inferential benchmarking procedure based on nonparametric confidence intervals, built via bootstrap resampling techniques, has been suggested. The statistical properties of the suggested benchmarking procedure have been investigated via a Monte Carlo simulation study exploring many scenarios defined by varying the level of agreement, the sample size and the rating-scale dimension. The simulation study has been carried out for different agreement coefficients and different confidence intervals, in order to provide a comparative analysis of their performances. Regarding the latter criticism, a novel composite index has been proposed, able to assess a rater's ability to provide evaluations that are both repeatable (i.e. stable over time) and reproducible (i.e. consistent across different rating scales). The inferential benchmarking procedure has been extended to the proposed composite index, and its performance has been investigated under different scenarios via Monte Carlo simulation. The proposed tools have been successfully applied to two real case studies, concerning the assessment of university teaching quality and the sensory analysis of some food and beverage products, respectively.
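
The benchmarking idea described above (a percentile bootstrap confidence interval for a kappa-type coefficient, to be compared against benchmark thresholds) can be sketched as follows. The choice of Cohen's kappa and all sample values are illustrative assumptions, not the thesis's actual procedure.

```python
# Sketch: percentile bootstrap confidence interval for a kappa-type
# agreement coefficient (Cohen's kappa for two raters, nominal scale).
# All data and parameter choices here are hypothetical illustrations.
import random

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters' labels on the same items."""
    n = len(r1)
    categories = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)        # chance agreement
             for c in categories)
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

def bootstrap_kappa_ci(r1, r2, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample rated items (pairs of labels)
    with replacement, recompute kappa on each resample."""
    rng = random.Random(seed)
    n = len(r1)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(cohen_kappa([r1[i] for i in idx],
                                 [r2[i] for i in idx]))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

The inferential benchmarking step would then compare the whole interval (rather than the point estimate) against benchmark cut-offs, declaring e.g. "substantial agreement" only if the lower bound clears the corresponding threshold.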

    Software Defined Network-Based Multi-Access Edge Framework for Vehicular Networks

    The authors are grateful to the Deanship of Scientific Research at King Saud University for funding this work through the Vice Deanship of Scientific Research Chairs: Chair of Pervasive and Mobile Computing.

    Risk Management Framework 2.0

    The quantification of risk has received a great deal of attention in recently published literature, and there is an opportunity for the DoD to take advantage of the information currently available to fundamentally improve on current risk assessment and management processes. The critical elements absent in the current process are the objective assessment of likelihood as part of the whole risk scenario and a visual representation or acknowledgement of uncertainty. A proposed framework would incorporate selected elements of multiple theories and axiomatic approaches in order to: (1) simultaneously examine multiple objectives of the organization, (2) limit bias and subjectivity during the assessment process by converting subjective risk contributors into quantitative values using tools that measure the attack surface and adversarial effort, (3) present likelihood and impact as real-time objective variables that reflect the state of the organization and are grounded in sound mathematical and scientific principles, (4) aggregate and function organization-wide (strategic, operational, and tactical) with maximum transparency, (5) achieve greater representation of the real scenario and strive to model future scenarios, (6) adapt to the preferred granularity, dimensions, and discovery of the decision maker, and (7) improve the decision maker’s ability to select the optimal alternative by reducing the decision to rational logic. The proposed solution is what I term Risk Management Framework 2.0, and the expected results of this modernized framework are reduced complexity, improved optimization, and more effective management of risk within the organization. This study introduces a Decision Support System (DSS) concept to aid implementation, maximize transparency and cross-level communication, and keep members operating within the bounds of the proposed framework.
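
One way to read element (3), presenting likelihood and impact as objective variables with explicit uncertainty, is to replace point scores with distributions and aggregate them by Monte Carlo simulation. The sketch below is a hypothetical illustration of that idea, not the framework's actual method; the scenario probabilities and impact ranges are invented parameters.

```python
# Hedged sketch: likelihood and impact as distributions, aggregated by
# Monte Carlo into an organization-wide annual loss estimate. Every
# numeric parameter here is a hypothetical illustration value.
import random

def simulate_risk_exposure(scenarios, n_sims=10000, seed=1):
    """scenarios: list of (annual event probability, (low, high) impact range).
    Returns (mean, 95th percentile) of the simulated total annual loss,
    so the decision maker sees uncertainty, not a single point score."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        total = 0.0
        for prob, (low, high) in scenarios:
            if rng.random() < prob:        # does the event occur this year?
                total += rng.uniform(low, high)   # draw an impact if it does
        totals.append(total)
    totals.sort()
    mean = sum(totals) / n_sims
    p95 = totals[int(0.95 * n_sims)]       # tail value supports risk appetite decisions
    return mean, p95
```

Reporting both the mean and a tail percentile (rather than a single likelihood-times-impact score) is one concrete way to give uncertainty the "visual representation or acknowledgement" the abstract says current processes lack.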