Activity Recognition using Hierarchical Hidden Markov Models on Streaming Sensor Data
Activity recognition from sensor data involves several challenges, such as overlapping activities, activity labeling, and activity detection. Although each of these challenges is important, the most pressing is online activity recognition. The present study uses an online hierarchical hidden Markov model to detect activities on a stream of sensor data, predicting the current activity in the environment as each sensor event arrives. The activity samples were labeled using statistical features such as activity duration. Tests of the proposed method on two real-world smart-home datasets showed a 4% improvement on one dataset (reaching 59%), while results on the other dataset reached 64.6% using the best methods
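The per-event filtering idea described above can be sketched with a flat (non-hierarchical) HMM; the activities, sensor events, and probabilities below are invented for illustration and are not from the study:

```python
import numpy as np

# Two hypothetical hidden activities (e.g. "cooking", "sleeping")
# and three sensor event types; all numbers are illustrative.
A = np.array([[0.9, 0.1],          # transition probabilities
              [0.2, 0.8]])
B = np.array([[0.6, 0.3, 0.1],     # emission: P(sensor event | activity)
              [0.1, 0.2, 0.7]])
belief = np.array([0.5, 0.5])      # uniform prior over activities

def update(belief, event):
    """One forward-filtering step: predict, then reweight by the new event."""
    predicted = belief @ A                 # propagate through transitions
    posterior = predicted * B[:, event]    # weight by the observed event
    return posterior / posterior.sum()     # renormalise

# Each incoming sensor event refines the activity estimate immediately,
# which is what makes the recognition "online".
for event in [0, 0, 2, 2, 2]:
    belief = update(belief, event)
print(belief.argmax())  # index of the most likely current activity
```

The hierarchical variant adds levels of sub-activity models on top, but this per-event belief update is the core of predicting the activity at any sensor event.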
The LONI QC System: A Semi-Automated, Web-Based and Freely-Available Environment for the Comprehensive Quality Control of Neuroimaging Data.
Quantifying, controlling, and monitoring image quality is an essential prerequisite for ensuring the validity and reproducibility of many types of neuroimaging data analyses. Implementing quality control (QC) procedures is key to ensuring that neuroimaging data are of high quality and valid for subsequent analyses. We introduce the QC system of the Laboratory of Neuro Imaging (LONI): a web-based system featuring a workflow for the assessment of brain imaging data of various modalities and contrasts. The design allows users to anonymously upload imaging data to the LONI-QC system. The system then computes an exhaustive set of QC metrics, aiding users in performing standardized QC by generating a range of scalar and vector statistics. These procedures are performed in parallel on a large compute cluster. Finally, the system offers an automated QC procedure for structural MRI, which can flag each QC metric as 'good' or 'bad.' Validation using various sets of data acquired from a single scanner and from multiple sites demonstrated the reproducibility of our QC metrics, and the sensitivity and specificity of the proposed Auto QC to 'bad'-quality images in comparison to visual inspection. To the best of our knowledge, LONI-QC is the first online QC system that computes numerous QC metrics and performs both visual and automated image QC of multi-contrast and multi-modal brain imaging data. The LONI-QC system has been used to assess the quality of large neuroimaging datasets acquired as part of various multi-site studies such as the Transforming Research and Clinical Knowledge in Traumatic Brain Injury (TRACK-TBI) Study and the Alzheimer's Disease Neuroimaging Initiative (ADNI).
LONI-QC's functionality is freely available to users worldwide and its adoption by imaging researchers is likely to contribute substantially to upholding high standards of brain image data quality and to implementing these standards across the neuroimaging community
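Threshold-based flagging of the kind the Auto QC performs can be sketched as follows; the metric names (`snr`, `cnr`) and cut-off values are hypothetical assumptions for illustration, not LONI-QC's actual metrics or thresholds:

```python
# Hypothetical minimum acceptable value per QC metric.
THRESHOLDS = {"snr": 20.0, "cnr": 1.5}

def auto_qc(metrics):
    """Flag each computed QC metric as 'good' or 'bad' against its threshold."""
    return {name: ("good" if value >= THRESHOLDS[name] else "bad")
            for name, value in metrics.items()}

# Example: a scan with adequate signal-to-noise but poor contrast-to-noise.
flags = auto_qc({"snr": 25.3, "cnr": 1.1})
print(flags)  # {'snr': 'good', 'cnr': 'bad'}
```

In practice such thresholds would be calibrated against visually rated images, which is what the reported sensitivity/specificity validation measures.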
On the role of pre and post-processing in environmental data mining
The quality of discovered knowledge is highly dependent on data quality. Unfortunately, real data often contain noise, uncertainty, errors, redundancies, or even irrelevant information. The more complex the reality to be analyzed, the higher the risk of obtaining low-quality data. Knowledge Discovery from Databases (KDD) offers a global framework for preparing data in the right form to perform correct analyses. On the other hand, the quality of decisions taken upon KDD results depends not only on the quality of the results themselves, but also on the system's capacity to communicate those results in an understandable form. Environmental systems are particularly complex, and environmental users especially require clarity in their results. In this paper some details about how this can be achieved are provided, and the role of pre- and post-processing in the whole process of Knowledge Discovery in environmental systems is discussed
A Multi-Factorial Risk Prioritization Framework for Food-Borne Pathogens
To lower the incidence of human food-borne disease, experts and stakeholders have urged the development of a science- and risk-based management system in which food-borne hazards are analyzed and prioritized. A literature review shows that most approaches to risk prioritization developed to date are based on measures of health outcomes and do not systematically account for other factors that may be important to decision making. The Multi-Factorial Risk Prioritization Framework developed here considers four factors that may be important to risk managers: public health, consumer risk perceptions and acceptance, market-level impacts, and social sensitivity. The framework is based on the systematic organization and analysis of data on these multiple factors. The basic building block of the information structure is a three-dimensional cube based on pathogen-food-factor relationships. Each cell of the cube has an information card associated with it and data from the cube can be aggregated along different dimensions. The framework is operationalized in three stages, with each stage adding another dimension to decision-making capacity. The first stage is the information cards themselves that provide systematic information that is not pre-processed or aggregated across factors. The second stage maps the information on the various information cards into cobweb diagrams that create a graphical profile of, for example, a food-pathogen combination with respect to each of the four risk prioritization factors. The third stage is formal multi-criteria decision analysis in which decision makers place explicit values on different criteria in order to develop risk priorities. The process outlined above produces a ‘List A’ of priority food-pathogen combinations according to some aggregate of the four risk prioritization factors. 
This list is further vetted to produce 'List B', which brings in feasibility analysis by ranking those combinations where practical actions that have a significant impact are feasible. Food-pathogen combinations where not enough is known to identify any or few feasible interventions are included in 'List C'. 'List C' highlights areas with significant uncertainty where further research may be needed to enhance the precision of the risk prioritization process. The separation of feasibility and uncertainty issues through the use of 'Lists A, B, and C' allows risk managers to focus separately on distinct dimensions of the overall prioritization. The Multi-Factorial Risk Prioritization Framework provides a flexible instrument that compares and contrasts risks along four dimensions. Use of the framework is an iterative process. It can be used to establish priorities across pathogens for a particular food, across foods for a particular pathogen and/or across specific food-pathogen combinations. This report provides a comprehensive conceptual paper that forms the basis for a wider process of consultation and for case studies applying the framework.
Keywords: risk analysis, risk prioritization, food-borne pathogens, benefits and costs
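The framework's third stage (formal multi-criteria decision analysis) can be illustrated as a simple weighted aggregation over the four factors; the weights, scores, and food-pathogen combinations below are made up for illustration and are not from the report:

```python
# Hypothetical decision-maker weights over the four prioritization factors.
WEIGHTS = {"public_health": 0.4, "risk_perception": 0.2,
           "market_impact": 0.2, "social_sensitivity": 0.2}

# Illustrative factor scores (0-1) for each food-pathogen "information card".
cards = {
    ("poultry", "Salmonella"): {"public_health": 0.9, "risk_perception": 0.8,
                                "market_impact": 0.7, "social_sensitivity": 0.5},
    ("deli meat", "Listeria"): {"public_health": 0.7, "risk_perception": 0.6,
                                "market_impact": 0.5, "social_sensitivity": 0.9},
}

def priority(scores):
    """Aggregate one card's factor scores into a single priority value."""
    return sum(WEIGHTS[f] * s for f, s in scores.items())

# 'List A': combinations ranked by the aggregate of the four factors.
list_a = sorted(cards, key=lambda c: priority(cards[c]), reverse=True)
print(list_a)  # highest-priority combination first
```

A weighted sum is only one possible aggregation; the point is that decision makers place explicit values on the criteria before ranking.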
Feasibility of Sensor Technology for Balance Assessment in Home Rehabilitation Settings
The increased use of sensor technology has been crucial in releasing the potential for remote rehabilitation. However, it is vital that human factors that have the potential to affect real-world use are fully considered before sensors are adopted into remote rehabilitation practice. The Smart Sensor Devices for Rehabilitation and Connected Health (SENDoc) project assesses the human factors associated with sensors for remote rehabilitation of elders in the Northern Periphery of Europe. This article conducts a literature review of human factors and puts forward an objective scoring system to evaluate the feasibility of balance assessment technology for adoption into remote rehabilitation settings. The main factors that must be considered are: deployment constraints, usability, comfort and accuracy. This article shows that improving accuracy, reliability and validity is the main goal of research focusing on developing novel balance assessment technology. However, other aspects of usability related to human factors such as practicality, comfort and ease of use need further consideration by researchers to help advance the technology to a state where it can be applied in remote rehabilitation settings
Time-Series Embedded Feature Selection Using Deep Learning: Data Mining Electronic Health Records for Novel Biomarkers
As health information technologies continue to advance, the routine collection and digitisation of patient health records in the form of electronic health records present an ideal opportunity for data mining and exploratory analysis of biomarkers and risk factors indicative of a potentially diverse domain of patient outcomes. Patient records have continually become more widely available through various initiatives enabling open access whilst maintaining critical patient privacy. In spite of such progress, health records remain not widely adopted within the current clinical statistical analysis domain due to the challenges posed by such "big data". Deep-learning-based temporal modelling approaches present an ideal solution to health record challenges through automated self-optimisation of representation learning, able to manageably compose the high-dimensional domain of patient records into data representations that model complex data associations. Such representations can serve to condense and reduce dimensionality, emphasising feature sparsity and importance through novel embedded feature selection approaches. Accordingly, application to patient records enables complex modelling and analysis of the full domain of clinical features to select biomarkers of predictive relevance. Firstly, we propose a novel entropy-regularised neural network ensemble able to highlight risk factors associated with hospitalisation risk in individuals with dementia. Its application reduced a large domain of unique medical events to a small set of relevant risk factors that maintain hospitalisation discrimination. Following on, we continue our work on ensemble architectures with a novel cascading LSTM ensemble to predict severe sepsis onset in critical patients in an ICU critical care centre.
We demonstrate state-of-the-art performance, outperforming that of current related literature. Finally, we propose a novel embedded feature selection application dubbed 1D convolution feature selection, using sparsity regularisation. This methodology was evaluated on both the dementia and sepsis prediction objectives to highlight model capability and generalisability. We further report a selection of potential biomarkers for the aforementioned case study objectives, highlighting their clinical relevance and potential novelty value for future clinical analysis. Accordingly, we demonstrate the effective capability of embedded feature selection approaches through the application of temporal deep learning architectures to the discovery of effective biomarkers across a variety of challenging clinical applications
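The embedded-feature-selection idea, where a sparsity penalty drives irrelevant feature weights to exactly zero during training, can be sketched with a plain linear model trained by proximal gradient descent (ISTA) on synthetic data; this is a toy stand-in for the thesis's 1D convolution architecture, not the actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # 200 synthetic patients, 5 candidate features
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * rng.normal(size=200)  # only 2 matter

w = np.zeros(5)
lr, lam = 0.01, 0.5                     # step size and L1 penalty strength
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)   # squared-error gradient
    w = w - lr * grad                   # gradient step
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold (L1 prox)

# Features whose weights survive the penalty are the "selected" biomarkers.
selected = np.nonzero(np.abs(w) > 1e-6)[0]
print(selected)
```

The soft-thresholding step is what makes the selection *embedded*: it happens inside the training loop rather than as a separate pre-processing pass, which is the property the thesis exploits with its deep temporal models.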
Mechanisms of Cognitive Impairment in Cerebral Small Vessel Disease: Multimodal MRI Results from the St George's Cognition and Neuroimaging in Stroke (SCANS) Study.
Cerebral small vessel disease (SVD) is a common cause of vascular cognitive impairment. A number of disease features can be assessed on MRI, including lacunar infarcts, T2 lesion volume, brain atrophy, and cerebral microbleeds. In addition, diffusion tensor imaging (DTI) is sensitive to disruption of white matter ultrastructure, and recently it has been suggested that additional information on the pattern of damage may be obtained from axial diffusivity, a proposed marker of axonal damage, and radial diffusivity, an indicator of demyelination. We determined the contribution of these whole-brain MRI markers to cognitive impairment in SVD. Consecutive patients with lacunar stroke and confluent leukoaraiosis were recruited into the ongoing SCANS study of cognitive impairment in SVD (n = 115), and underwent neuropsychological assessment and multimodal MRI. SVD subjects displayed poor performance on tests of executive function and processing speed. In the SVD group, brain volume was lower, white matter hyperintensity volume was higher, and all diffusion characteristics differed significantly from those of control subjects (n = 50). On multi-predictor analysis, independent predictors of executive function in SVD were lacunar infarct count and diffusivity of normal-appearing white matter on DTI. Independent predictors of processing speed were lacunar infarct count and brain atrophy. Radial diffusivity was a stronger DTI predictor than axial diffusivity, suggesting that ischaemic demyelination, seen neuropathologically in SVD, may be an important predictor of cognitive impairment in SVD. Our study provides information on the mechanism of cognitive impairment in SVD