
    The Use of Technology in the Subcategorisation of Osteoarthritis: A Delphi Study Approach

    Objective This UK-wide OATech Network+ consensus study utilised a Delphi approach to discern levels of awareness across an expert panel regarding the role of existing and novel technologies in osteoarthritis research. To direct future cross-disciplinary research, it aimed to identify which technologies could be adopted to subcategorise patients with osteoarthritis (OA). Design An online questionnaire was formulated based on technologies which might aid OA research and subcategorisation. During a two-day face-to-face meeting, concordance of expert opinion was established with surveys (23 questions) before, during and at the end of the meeting (Rounds 1, 2 and 3, respectively). Experts spoke on current evidence for imaging, genomics, epigenomics, proteomics, metabolomics, biomarkers, activity monitoring, clinical engineering and machine learning relating to subcategorisation. For each round of voting, ≥80% of votes led to consensus on a statement and ≤20% to its exclusion. Results Panel members were unanimous that a combination of novel technological advances has the potential to improve OA diagnostics and treatment through subcategorisation, agreeing in Rounds 1 and 2 that epigenetics, genetics, MRI, proteomics, wet biomarkers and machine learning could aid subcategorisation. Expert presentations changed participants' opinions on the value of metabolomics, activity monitoring and clinical engineering, all reaching consensus in Round 2. X-rays lost consensus between Rounds 1 and 2; clinical X-rays reached consensus in Round 3. Conclusion Consensus identified that 9 of the 11 technologies should be targeted towards OA subcategorisation to address existing OA research technology and knowledge gaps. These novel, rapidly evolving technologies are recommended as a focus for emergent, cross-disciplinary osteoarthritis research programmes.
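
    The voting rule in the Design section is precise enough to express directly. Below is a minimal Python sketch of the per-round thresholds as described; the function name and the "carried forward" label for statements falling between the thresholds are illustrative assumptions, not terminology from the study.

        def classify_statement(agree_votes: int, total_votes: int) -> str:
            """Apply the study's Delphi thresholds to one statement per round:
            >= 80% agreement -> consensus reached, <= 20% -> statement excluded,
            anything in between -> carried forward to the next round of voting."""
            share = agree_votes / total_votes
            if share >= 0.80:
                return "consensus"
            if share <= 0.20:
                return "excluded"
            return "carried forward"

        # Example: 17 of 20 panel members agree (85%) -> consensus
        print(classify_statement(17, 20))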

    Big Data Analytics for Wireless and Wired Network Design: A Survey

    Currently, the world is witnessing a mounting avalanche of data due to the increasing number of mobile network subscribers, Internet websites, and online services. This trend continues to develop quickly and in diverse forms of big data. Big data analytics can process large amounts of raw data and extract useful, smaller-sized information, which can be used by different parties to make reliable decisions. In this paper, we conduct a survey on the role that big data analytics can play in the design of data communication networks. Integrating the latest advances that employ big data analytics with the networks' control/traffic layers might be the best way to build robust data communication networks with refined performance and intelligent features. First, the survey starts with an introduction to the basic concepts, framework, and characteristics of big data. Second, we illustrate the main network design cycle employing big data analytics; this cycle represents the umbrella concept that unifies the surveyed topics. Third, there is a detailed review of the current academic and industrial efforts toward network design using big data analytics. Fourth, we identify the challenges confronting the utilization of big data analytics in network design. Finally, we highlight several future research directions. To the best of our knowledge, this is the first survey that addresses the use of big data analytics techniques for the design of a broad range of networks.
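
    As a concrete illustration of the survey's central idea, reducing large raw measurements to small, decision-ready summaries that feed the network design cycle, here is a minimal Python/pandas sketch; the log schema, link names, and the 70% utilisation threshold are illustrative assumptions, not from the survey.

        import pandas as pd

        # Raw per-link throughput samples (illustrative schema).
        logs = pd.DataFrame({
            "link": ["A-B", "A-B", "B-C", "B-C", "A-C", "A-C"],
            "throughput_bps": [9.1e9, 8.7e9, 2.0e9, 2.2e9, 6.5e9, 7.9e9],
            "capacity_bps": [1e10] * 6,
        })

        # Reduce the raw log to one mean-utilisation figure per link ...
        summary = (logs.assign(util=logs["throughput_bps"] / logs["capacity_bps"])
                       .groupby("link")["util"].mean())

        # ... and surface only the links a designer needs to act on.
        print(summary[summary > 0.70])  # candidates for a capacity upgrade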

    Big data processing tools: An experimental performance evaluation

    Big Data is currently a hot topic of research and development across several business areas, mainly due to recent innovations in information and communication technologies. One of the main challenges of Big Data is how to efficiently handle massive volumes of complex data. Given the notorious complexity of data collected from multiple sources, usually in increasing volumes gathered at high velocity, efficient processing mechanisms are needed for data analysis purposes. Motivated by the rapid growth of Big Data technologies, tools, and frameworks, there is much discussion about Big Data querying tools and, specifically, which are more appropriate for specific analytical needs. This paper describes and evaluates the following popular Big Data processing tools: Drill, HAWQ, Hive, Impala, Presto, and Spark. An experimental evaluation using the Transaction Processing Performance Council's TPC-H benchmark is presented and discussed, highlighting the performance of each tool according to different workloads and query types. This article is categorized under: Technologies > Computer Architectures for Data Mining; Fundamental Concepts of Data and Knowledge > Big Data Mining; Technologies > Data Preprocessing; Application Areas > Data Mining Software Tools.
    Funding: FCT – Fundação para a Ciência e Tecnologia, Grant/Award Number UID/CEC/00319/2013; COMPETE, Grant/Award Number POCI-01-0145-FEDER-007043.
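
    To make the evaluation setup concrete, here is a minimal sketch of timing one TPC-H-style query in Spark SQL, one of the six tools compared. The query is a simplified form of TPC-H Q1; the Parquet path and the timing harness are illustrative assumptions, not the paper's benchmark driver.

        import time
        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("tpch-q1-sketch").getOrCreate()

        # Assumes a TPC-H `lineitem` table already generated (e.g. with dbgen)
        # and stored as Parquet at this hypothetical path.
        spark.read.parquet("/data/tpch/lineitem").createOrReplaceTempView("lineitem")

        query = """
            SELECT l_returnflag, l_linestatus,
                   SUM(l_quantity)                          AS sum_qty,
                   SUM(l_extendedprice * (1 - l_discount))  AS sum_disc_price,
                   AVG(l_quantity)                          AS avg_qty,
                   COUNT(*)                                 AS count_order
            FROM lineitem
            WHERE l_shipdate <= DATE '1998-09-02'
            GROUP BY l_returnflag, l_linestatus
            ORDER BY l_returnflag, l_linestatus
        """

        start = time.perf_counter()
        spark.sql(query).collect()  # force full execution of the query
        print(f"elapsed: {time.perf_counter() - start:.2f}s")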