2,835 research outputs found

    Design and optimization of medical information services for decision support


    Mobile health data: investigating the data used by an mHealth app using different mobile app architectures

    Mobile Health (mHealth) has come a long way in the last forty years and is still rapidly evolving and presenting many opportunities. Advancements in mobile technology and wireless mobile communication technology have contributed to the rapid evolution and development of mHealth. Consequently, this evolution has led to mHealth solutions that are now capable of generating large amounts of data that is synchronised and stored on remote cloud and central servers, ensuring that the data is distributable to healthcare providers and available for analysis and decision making. However, the amount of data used by mHealth apps can contribute significantly to the overall cost of implementing a new, or upscaling an existing, mHealth solution. The purpose of this research was to determine whether the amount of data used by mHealth apps differs significantly when they are implemented using different mobile app architectures. Three mHealth apps using different mobile app architectures were developed and evaluated: the first was a native app, the second a standard mobile Web app, and the third a mobile Web app that used Asynchronous JavaScript and XML (AJAX). Experiments using the same data inputs were conducted on the three apps, with the primary objective of determining whether there was a significant difference in the amount of data used by the different versions. The results demonstrated that native apps, which are installed and executed on local mobile devices, used the least data and were more data-efficient than mobile Web apps executed in mobile Web browsers. They also showed that mobile apps implemented using different mobile app architectures exhibit a significant difference in the amount of data used during normal app usage.
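    Seen at the level of a single request, the architectural difference is easy to picture: a native or AJAX client can fetch just the structured record it needs, whereas a standard mobile Web app re-downloads full page markup to display the same information. The following Python sketch compares the two payload sizes; the record fields and page template are invented for illustration and are not taken from the study.

```python
import json

# A hypothetical mHealth observation, as a native/AJAX client might request it.
record = {"patient_id": "P-1042", "timestamp": "2024-05-01T08:30:00Z",
          "systolic": 118, "diastolic": 76, "heart_rate": 72}

ajax_payload = json.dumps(record).encode("utf-8")  # structured data only

# A standard mobile Web app re-sends the same record wrapped in full page markup.
page_template = """<!DOCTYPE html><html><head><title>Readings</title>
<link rel="stylesheet" href="app.css"></head><body><table>
<tr><th>Field</th><th>Value</th></tr>{rows}</table></body></html>"""
rows = "".join(f"<tr><td>{k}</td><td>{v}</td></tr>" for k, v in record.items())
web_payload = page_template.format(rows=rows).encode("utf-8")

print(f"AJAX/native payload: {len(ajax_payload)} bytes")
print(f"Full-page payload:   {len(web_payload)} bytes")
```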

    SDTDMn0 : a multidimensional distributed data mining framework supporting time series data analysis for critical care research

    Premature birth is one of the major perinatal health issues across the world; in 2007, the estimated Canadian preterm birth rate was 8.1% (CIHI, 2009). Recent research has shown that conditions such as nosocomial infections or apnoeas are preceded by characteristic variations in an infant's physiological parameters, which can indicate the onset of the event before it is detected by physicians and nurses. Neonatal Intensive Care Units are among the highest information-producing areas in hospitals. The multidimensional and distributed nature of the data adds a further layer of complexity, as physiological changes can occur in a single data stream or be cross-correlated across several streams. With the collection and storage of electronic data becoming a global trend, there is an opportunity to analyse the collected data in order to extract meaningful information and improve healthcare. These properties of the data motivate the need for a framework that supports analysis and trend detection in a multidimensional and distributed environment.
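    As a minimal illustration of the kind of cross-stream analysis such a framework would need to support (this sketch is not the SDTDMn0 implementation, and the two physiological signals are synthetic), the snippet below cross-correlates two streams to estimate the lag between related changes.

```python
import numpy as np

# Synthetic physiological streams sampled at 1 Hz: heart rate and SpO2,
# where the SpO2 change trails the heart-rate change by about 5 seconds.
rng = np.random.default_rng(0)
t = np.arange(300)
heart_rate = 140 + 5 * np.sin(t / 20) + rng.normal(0, 0.5, t.size)
spo2 = 96 - 0.5 * np.sin((t - 5) / 20) + rng.normal(0, 0.1, t.size)

# Normalise each stream, then cross-correlate to find the strongest alignment.
hr = (heart_rate - heart_rate.mean()) / heart_rate.std()
ox = (spo2 - spo2.mean()) / spo2.std()
corr = np.correlate(hr, ox, mode="full")
lags = np.arange(-t.size + 1, t.size)
best_lag = lags[np.argmax(np.abs(corr))]
print(f"Strongest cross-correlation at a lag of {best_lag} samples")
```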

    A distributed architecture for the monitoring and analysis of time series data

    It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2⁷⁰ bytes), and this figure is expected to have grown by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising such systems have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats those characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question asked is whether a generic solution can be identified for the monitoring and analysis of data that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner. The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating those techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near real-time capturing and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, because, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
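    A minimal sketch of a production/interpretation/consumption workflow with an attached provenance record might look like the following; the class and function names here are illustrative assumptions, not the platform's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, List

@dataclass
class ProvenancedResult:
    """A derived value together with a record of how it was produced."""
    value: object
    source: str
    technique: str
    produced_at: str
    history: List[str] = field(default_factory=list)

def analyse(raw: List[float], source: str,
            technique_name: str, technique: Callable) -> ProvenancedResult:
    """Apply an analysis technique to raw data and record its provenance."""
    result = technique(raw)
    stamp = datetime.now(timezone.utc).isoformat()
    return ProvenancedResult(
        value=result, source=source, technique=technique_name,
        produced_at=stamp,
        history=[f"{technique_name} applied to {source} at {stamp}"])

# The same raw stream can be consumed by different techniques independently,
# so one technique's assumptions never leak into another's result.
raw_stream = [71.0, 72.5, 70.8, 69.9, 73.1]
mean_result = analyse(raw_stream, "nicu/bed-3/heart-rate", "mean",
                      lambda xs: sum(xs) / len(xs))
peak_result = analyse(raw_stream, "nicu/bed-3/heart-rate", "max", max)
print(mean_result.history[0])
print(peak_result.value)
```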

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests that physiological measurements are needed; we show that it is possible to estimate user emotion with a software-only method.
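    To make the fuzzy-logic idea concrete, the sketch below maps a single in-game statistic to an emotion estimate through fuzzy membership functions and a weighted-average defuzzification. It is not the authors' FLAME-based implementation; the membership ranges, the game statistic and the output values are invented purely for illustration.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_frustration(deaths_per_minute: float) -> float:
    """Map a game statistic to a frustration score in [0, 1] via fuzzy rules."""
    low = triangular(deaths_per_minute, -1.0, 0.0, 1.0)
    medium = triangular(deaths_per_minute, 0.5, 1.5, 2.5)
    high = triangular(deaths_per_minute, 2.0, 3.5, 10.0)

    # Weighted-average defuzzification over the rule outputs:
    # low deaths -> calm (0.1), medium -> tense (0.5), high -> frustrated (0.9).
    weights = [low, medium, high]
    outputs = [0.1, 0.5, 0.9]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

print(estimate_frustration(0.4))   # mostly calm
print(estimate_frustration(3.0))   # mostly frustrated
```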

    Computerised physiological trend monitoring in neonatal intensive care


    Protegen: a web-based protective antigen database and analysis system

    Protective antigens are specifically targeted by the acquired immune response of the host and are able to induce protection in the host against infectious and non-infectious diseases. Protective antigens play important roles in vaccine development, as biological markers for disease diagnosis, and in the analysis of fundamental host immunity against diseases. Protegen is a web-based central database and analysis system that curates, stores and analyzes protective antigens. Basic antigen information and experimental evidence are curated from peer-reviewed articles. More detailed gene/protein information (e.g. DNA and protein sequences, and COG classification) is automatically extracted from existing databases using internally developed scripts. Bioinformatics programs are also applied to compute different antigen features, such as protein weight, isoelectric point (pI) and the subcellular localization of bacterial proteins. Presently, 590 protective antigens have been curated against over 100 infectious diseases caused by pathogens, as well as non-infectious diseases (including cancers and allergies). A user-friendly web query and visualization interface has been developed for interactive protective antigen search, and a customized BLAST sequence similarity search is provided for the analysis of new sequences submitted by users. To support data exchange, the information on protective antigens is stored in the Vaccine Ontology (VO) in OWL format and can also be exported to FASTA and Excel files. Protegen is publicly available at http://www.violinet.org/protegen
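    As an illustration of how antigen features such as protein weight and pI can be computed, the snippet below uses Biopython's ProteinAnalysis on an arbitrary example sequence; this is a generic sketch, not Protegen's own pipeline, and the sequence is made up for the example.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# An arbitrary short protein sequence used purely for illustration.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"

analysis = ProteinAnalysis(sequence)
print(f"Molecular weight:  {analysis.molecular_weight():.1f} Da")
print(f"Isoelectric point: {analysis.isoelectric_point():.2f}")
```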