23 research outputs found

    A Platform for the Analysis of Qualitative and Quantitative Data about the Built Environment and its Users

    There are many scenarios in which it is necessary to collect data from multiple sources in order to evaluate a system, including the collection of both quantitative data - from sensors and smart devices - and qualitative data - such as observations and interview results. However, very few systems currently enable both of these data types to be combined in such a way that they can be analysed side by side. This paper describes an end-to-end system for the collection, analysis, storage and visualisation of qualitative and quantitative data, developed using the e-Science Central cloud analytics platform. We describe the experience of developing the system, based on a case study that involved collecting data about the built environment and its users. In this case study, data was collected from older adults living in residential care: sensors were placed throughout the care home and smart devices were issued to the residents. The sensor data was uploaded to the analytics platform and the processed results were stored in a data warehouse, where they were integrated with qualitative data collected by healthcare and architecture researchers. Visualisations are also presented which were intended to allow the data to be explored and potential correlations between the quantitative and qualitative data to be investigated.
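A rough sketch of the kind of side-by-side analysis such a platform enables: quantitative sensor readings joined with qualitative field notes on a shared key. The column names and values below are invented for illustration and are not the paper's actual schema.

```python
# Hypothetical sketch: align quantitative sensor readings with qualitative
# observations by resident and date so the two data types can be analysed
# side by side. All identifiers and values are invented.
import pandas as pd

sensor = pd.DataFrame({
    "resident_id": [1, 1, 2],
    "date": ["2024-05-01", "2024-05-02", "2024-05-01"],
    "steps": [1200, 800, 450],
})

observations = pd.DataFrame({
    "resident_id": [1, 2],
    "date": ["2024-05-01", "2024-05-01"],
    "note": ["walked to the garden unaided", "stayed in room most of the day"],
})

# Left merge keeps every sensor reading, with notes attached where they exist.
combined = sensor.merge(observations, on=["resident_id", "date"], how="left")
print(combined)
```

Readings without a matching observation simply carry an empty note, so the quantitative record stays complete.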

    Cloud Computing for Chemical Activity Prediction

    This paper describes how cloud computing has been used to reduce the time taken to generate chemical activity models from years to weeks. Chemists use Quantitative Structure-Activity Relationship (QSAR) models to predict the activity of molecules. The existing Discovery Bus software builds these models automatically from datasets containing known molecular activities, using a "panel of experts" algorithm. Newly available datasets offer the prospect of generating a large number of significantly better models, but the Discovery Bus would have taken over 5 years to compute them. Fortunately, we show that the "panel of experts" algorithm is well matched to clouds. In the paper we describe the design of a scalable, Windows Azure-based infrastructure for the panel of experts pattern. We present the results of a run in which up to 100 Azure nodes were used to generate results from the new datasets in 3 weeks.
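The "panel of experts" pattern parallelises well because each (dataset, expert) pairing is an independent task. The sketch below illustrates that shape only; it is not the Discovery Bus code, and the dataset names, expert names and scores are invented stand-ins for real QSAR model fitting.

```python
# Illustrative sketch of the "panel of experts" pattern: candidate model
# builders are scored in parallel per dataset and the best one is kept.
from concurrent.futures import ThreadPoolExecutor

SCORES = {  # stand-in for fitting and cross-validating a real QSAR model
    ("activity_set_a", "linear"): 0.64, ("activity_set_a", "tree"): 0.82,
    ("activity_set_a", "neural"): 0.77, ("activity_set_b", "linear"): 0.58,
    ("activity_set_b", "tree"): 0.61, ("activity_set_b", "neural"): 0.69,
}

def build_and_score(task):
    dataset, expert = task
    return dataset, expert, SCORES[(dataset, expert)]

tasks = list(SCORES)  # one independent task per (dataset, expert) pair
best = {}
with ThreadPoolExecutor(max_workers=4) as pool:
    for dataset, expert, score in pool.map(build_and_score, tasks):
        if dataset not in best or score > best[dataset][1]:
            best[dataset] = (expert, score)

print(best)
```

Because the tasks share nothing, the same loop scales from a thread pool on one machine to a queue of cloud worker nodes.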

    Supporting NGS pipelines in the cloud

    Cloud4Science is a research activity funded by Microsoft that develops a unique online platform providing cloud services, datasets, tools, documentation, tutorials and best practices to meet the needs of researchers across the globe in terms of storing and managing datasets. Cloud4Science initially focuses on dedicated services for the bioinformatics community. Its ultimate goal is to support a wide range of scientific communities as the natural first choice for scientific data curation, analysis and… The authors thank Microsoft and the Cloud4Science project for funding this research activity. Blanquer Espert, I.; Brasche, G.; Cala, J.; Gagliardi, F.; Gannon, D.; Hiden, H.; Soncu, H. et al. (2013). Supporting NGS pipelines in the cloud. EMBnet Journal 19(Supplement A):14-16. doi:10.14806/ej.19.A.625

    Mobility recorded by wearable devices and gold standards: the Mobilise-D procedure for data standardization

    Wearable devices are used in movement analysis and physical activity research to extract clinically relevant information about an individual's mobility. Still, heterogeneity in protocols, sensor characteristics, data formats, and gold standards represents a barrier to data sharing, reproducibility, and external validation. In this study, we aim to provide an example of how movement data (from the real world and the laboratory) recorded from different wearables and gold-standard technologies can be organized, integrated, and stored. We leveraged our experience from a large multi-centric study (Mobilise-D) to provide guidelines that can prove useful to access, understand, and re-use the data that will be made available from the study. These guidelines highlight the encountered challenges and the adopted solutions, with the final aim of supporting standardization and integration of data in other studies and, in turn, increasing and facilitating comparison of data recorded across the scientific community. We also provide samples of standardized data, so that both the structure of the data and the procedure can be easily understood and reproduced.
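To make the standardization idea concrete, here is a minimal sketch of what a self-describing wearable-data record might look like: metadata (device, units, context) travels with the samples, and a simple check rejects records missing required fields. The field names are illustrative only and are not the actual Mobilise-D schema.

```python
# Hypothetical standardized record for wearable movement data.
# Field names and values are invented for illustration.
import json

record = {
    "subject_id": "sub-001",
    "context": "real-world",  # or "laboratory"
    "device": {"type": "IMU", "sampling_rate_hz": 100, "position": "lower_back"},
    "units": {"acceleration": "m/s^2", "angular_velocity": "deg/s"},
    "samples": [
        {"t": 0.00, "acc": [0.1, 9.8, 0.2], "gyr": [0.5, -0.1, 0.0]},
        {"t": 0.01, "acc": [0.2, 9.7, 0.1], "gyr": [0.4, -0.2, 0.1]},
    ],
}

def validate(rec):
    """Reject records that lack the fields needed to interpret the samples."""
    required = {"subject_id", "context", "device", "units", "samples"}
    missing = required - rec.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True

assert validate(record)
serialized = json.dumps(record)  # plain JSON keeps the record tool-agnostic
```

Carrying units and sampling rate inside every record is what lets data from heterogeneous devices be integrated and re-used without out-of-band documentation.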

    Technical validation of real-world monitoring of gait: a multicentric observational study

    Introduction: Existing mobility endpoints based on functional performance, physical assessments and patient self-reporting are often affected by a lack of sensitivity, limiting their utility in clinical practice. Wearable devices including inertial measurement units (IMUs) can overcome these limitations by quantifying digital mobility outcomes (DMOs) both during supervised structured assessments and in real-world conditions. The validity of IMU-based methods in the real world, however, is still limited in patient populations. Rigorous validation procedures should cover the device's metrological verification, the validation of the algorithms for DMO computation specifically for the population of interest and in daily-life situations, and the users' perspective on the device. Methods and analysis: This protocol was designed to establish the technical validity and patient acceptability of the approach used to quantify digital mobility in the real world by Mobilise-D, a consortium funded by the European Union (EU) as part of the Innovative Medicines Initiative, aiming to foster regulatory approval and clinical adoption of DMOs. After defining the procedures for the metrological verification of an IMU-based device, the experimental procedures for the validation of the algorithms used to calculate the DMOs are presented. These include laboratory and real-world assessment in 120 participants from six groups: healthy older adults and patients with chronic obstructive pulmonary disease, Parkinson's disease, multiple sclerosis, proximal femoral fracture and congestive heart failure. DMOs extracted from the monitoring device will be compared with those from different reference systems, chosen according to the contexts of observation. Questionnaires and interviews will evaluate the users' perspective on the deployed technology and the relevance of the mobility assessment.
    Ethics and dissemination: The study has been granted ethics approval by the centres' committees (London-Bloomsbury Research Ethics Committee; Helsinki Committee, Tel Aviv Sourasky Medical Centre; Medical Faculties of the University of Tübingen and of the University of Kiel). Data and algorithms will be made publicly available.
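A toy illustration of the device-versus-reference comparison at the heart of such a validation: two simple agreement statistics over paired measurements of one DMO. The numbers are invented and the statistics shown (mean bias and mean absolute error) are generic choices, not necessarily the ones the protocol prescribes.

```python
# Illustrative comparison of a digital mobility outcome (walking speed, m/s)
# from the wearable device against a reference system. Values are invented.
device    = [1.10, 0.95, 1.22, 0.88]
reference = [1.05, 1.00, 1.18, 0.90]

errors = [d - r for d, r in zip(device, reference)]
bias = sum(errors) / len(errors)                 # systematic over/under-estimation
mae = sum(abs(e) for e in errors) / len(errors)  # average error magnitude

print(f"bias={bias:.3f} m/s, MAE={mae:.3f} m/s")
```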

    Accelerometer-based gait assessment: Pragmatic deployment on an international scale

    Gait is emerging as a powerful tool to detect early disease and monitor progression across a number of pathologies. Typically quantitative gait assessment has been limited to specialised laboratory facilities. However, measuring gait in home and community settings may provide a more accurate reflection of gait performance because: (1) it will not be confounded by attention which may be heightened during formal testing; and (2) it allows performance to be captured over time. This work addresses the feasibility and challenges of measuring gait characteristics with a single accelerometer based wearable device during free-living activity. Moreover, it describes the current methodological and statistical processes required to quantify those sensitive surrogate markers for ageing and pathology. A unified framework for large scale analysis is proposed. We present data and workflows from healthy older adults and those with Parkinson's disease (PD) while presenting current algorithms and scope within modern pervasive healthcare. Our findings suggested that free-living conditions heighten between group differences showing greater sensitivity to PD, and provided encouraging results to support the use of the suggested framework for large clinical application
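As a minimal sketch of extracting one gait characteristic from a single accelerometer, the example below counts steps as peaks in a vertical-acceleration signal. The threshold, the peak rule and the synthetic signal are invented for illustration; real pipelines are considerably more sophisticated.

```python
# Toy step counter: count local maxima above a threshold in a 1-D
# vertical-acceleration signal (m/s^2). Threshold and data are invented.

def count_steps(signal, threshold=11.0):
    """Count samples that exceed `threshold` and are local maxima."""
    steps = 0
    for prev, cur, nxt in zip(signal, signal[1:], signal[2:]):
        if cur > threshold and cur >= prev and cur > nxt:
            steps += 1
    return steps

# Synthetic signal: gravity baseline (~9.8) with four step impacts.
signal = [9.8, 9.9, 12.1, 9.7, 9.8, 12.4, 9.9, 9.8, 12.0, 9.7, 12.2, 9.8]

print(count_steps(signal))
```

From step events like these, higher-level characteristics (cadence, bout length, variability) can then be derived over days of free-living data.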

    Achieving reproducibility by combining provenance with service and workflow versioning

    Capturing and exploiting provenance information is considered to be important across a range of scientific, medical, commercial and Web applications [1, 2, 3, 4], including recent trends towards publishing provenance-rich, executable papers [5]. This article shows how the range of useful questions that provenance can answer is greatly increased when it is encapsulated into a system that can store and execute both current and old versions of workflows and services. e-Science Central provides a scalable, secure cloud platform for application developers. They can use it to upload data - for storage on the cloud - and services, which can be written in a variety of languages. These services can then be combined through workflows which are enacted in the cloud to compute over the data. When a workflow runs, a complete provenance trace is recorded. This paper shows how this provenance trace, used in conjunction with the ability to execute old versions of services and workflows (rather than just the latest versions), can provide useful information that would otherwise not be possible, including the key ability to reproduce experiments and to compare the effects of old and new versions of services on computations.
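The core idea, reduced to a few lines: a provenance trace records not just inputs and outputs but the exact service version used, so a run can later be replayed against that version even after the service has been updated. This is an invented illustration of the principle, not e-Science Central's implementation.

```python
# Illustrative sketch: provenance that records service versions enables
# reproduction after the service changes. All names are hypothetical.

services = {  # (service name, version) -> implementation
    ("normalise", 1): lambda x: x / 10,
    ("normalise", 2): lambda x: x / 100,  # later "improved" version
}

def run_workflow(value, version):
    result = services[("normalise", version)](value)
    # The trace captures everything needed to re-execute this exact run.
    trace = {"input": value, "service": "normalise",
             "version": version, "output": result}
    return result, trace

out, trace = run_workflow(50, version=1)          # original run
reproduced, _ = run_workflow(trace["input"],      # replay from the trace,
                             version=trace["version"])  # not the latest version
assert reproduced == out
```

Without the recorded version, replaying the trace against version 2 would silently give a different answer, which is exactly the failure mode versioned provenance prevents.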

    Provenance and data differencing for workflow reproducibility analysis

    One of the foundations of science is that researchers must publish the methodology used to achieve their results so that others can attempt to reproduce them. This has the added benefit of allowing methods to be adopted and adapted for other purposes. In the field of e-Science, services, often choreographed through workflows, process data to generate results. The reproduction of results is often not straightforward, as the computational objects may not be made available or may have been updated since the results were generated. For example, services are often updated to fix bugs or improve algorithms. This paper addresses these problems in three ways. Firstly, it introduces a new framework to clarify the range of meanings of 'reproducibility'. Secondly, it describes a new algorithm, PDIFF, that uses a comparison of workflow provenance traces to determine whether an experiment has been reproduced; the main innovation is that, if this is not the case, the specific point(s) of divergence are identified through graph analysis, assisting any researcher wishing to understand those differences. One key feature is support for user-defined, semantic data comparison operators. Finally, the paper describes an implementation of PDIFF that leverages the power of the e-Science Central platform, which enacts workflows in the cloud. As well as automatically generating a provenance trace for consumption by PDIFF, the platform supports the storage and reuse of old versions of workflows, data and services; the paper shows how this can be powerfully exploited to achieve reproduction and reuse.
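A toy sketch of the divergence-finding idea: walk two provenance traces step by step and report the first step whose output differs. The real PDIFF operates on provenance graphs with user-defined semantic comparison operators; this linear version with invented data only illustrates the localisation principle.

```python
# Toy divergence finder over two linear provenance traces (invented data):
# each trace is a list of (step name, output) pairs in execution order.

def first_divergence(trace_a, trace_b):
    """Return (step index, output a, output b) at the first mismatch, else None."""
    for step, (a, b) in enumerate(zip(trace_a, trace_b)):
        if a != b:
            return step, a, b
    return None

original = [("load", "raw.csv"), ("clean", 980), ("model", 0.91)]
rerun    = [("load", "raw.csv"), ("clean", 975), ("model", 0.88)]

print(first_divergence(original, rerun))
```

Pinpointing that the traces first diverge at the "clean" step tells the researcher where to look (a changed service version, different input data) instead of only reporting that the final results disagree.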