Falls Prediction in Care Homes Using Mobile App Data Collection
Falls are one of the leading causes of unintentional injury-related deaths in older adults. Although falls among the elderly are a well-documented phenomenon, falls of care homes’ residents have been under-researched, mainly due to the lack of documented data. In this study, we use data from over 1,769 care homes and 68,200 residents across the UK, collected over three years by carers who routinely documented the residents’ activities using the Mobile Care Monitoring mobile app. The study focuses on predicting the first fall of elderly care-home residents a week ahead, with predictions made continuously over a time window covering the preceding weeks. Due to the intrinsically longitudinal and heterogeneous nature of the data, we employ Temporal Abstraction and Time Intervals Related Patterns discovery, and use the discovered patterns as features for classification. We designed an experiment that reflects real-life conditions to evaluate the framework; an observation window of four weeks performed best.
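The abstract does not include code; the following is a minimal sketch of the general idea of feeding pattern-based features from a four-week observation window to a classifier to predict a fall in the following week. The synthetic data, the feature counts, and the choice of a random forest are illustrative assumptions, not the paper's TIRP-mining pipeline.

```python
# Sketch: predict a fall in the coming week from a four-week observation
# window of temporal-pattern features. All names and the synthetic data are
# illustrative assumptions; the actual TIRP discovery step is not reproduced.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

n_residents, n_patterns = 1000, 50           # residents x discovered patterns
# X[i, j] = how often pattern j occurred for resident i in the last 4 weeks
X = rng.poisson(lam=1.0, size=(n_residents, n_patterns))
# y[i] = 1 if the resident's first fall happened in the following week
y = (X[:, :5].sum(axis=1) + rng.normal(0, 2, n_residents) > 7).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```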
Malware Detection using Machine Learning and Deep Learning
Research shows that over the last decade, malware has been growing exponentially, causing substantial financial losses to various organizations. Different anti-malware companies have been proposing solutions to defend against attacks from this malware. The velocity, volume, and complexity of malware are posing new challenges to the anti-malware community. Current state-of-the-art research shows that researchers and anti-virus organizations have recently started applying machine learning and deep learning methods for malware analysis and detection. We have used opcode frequency as a feature vector and applied unsupervised learning in addition to supervised learning for malware classification. The focus of this tutorial is to present our work on detecting malware with 1) various machine learning algorithms and 2) deep learning models. Our results show that Random Forest outperforms a Deep Neural Network with opcode frequency as a feature. Also, for feature reduction, Deep Auto-Encoders are overkill for the dataset, and an elementary method like Variance Threshold performs better than the others. In addition to the proposed methodologies, we will also discuss additional issues and unique challenges in the domain, open research problems, limitations, and future directions.
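As a rough illustration of the pipeline described above, the sketch below uses opcode-frequency vectors as features, a Variance Threshold filter for feature reduction, and a Random Forest classifier. The opcode counts and labels are synthetic placeholders, not the authors' dataset, and the threshold value is an assumption.

```python
# Sketch of the opcode-frequency pipeline: counts of opcodes per binary as
# the feature vector, VarianceThreshold for feature reduction, and a Random
# Forest classifier. Data here is synthetic, purely for illustration.
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_samples, n_opcodes = 500, 300               # binaries x distinct opcodes
X = rng.poisson(lam=0.5, size=(n_samples, n_opcodes)).astype(float)
y = rng.integers(0, 2, size=n_samples)        # 0 = benign, 1 = malware

# Drop near-constant opcode-count features before classification.
selector = VarianceThreshold(threshold=0.1)
X_reduced = selector.fit_transform(X)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X_reduced, y, cv=5).mean())
```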
FIBS: A Generic Framework for Classifying Interval-based Temporal Sequences
We study the problem of classifying interval-based temporal sequences (IBTSs). Since common classification algorithms cannot be directly applied to IBTSs, the main challenge is to define a set of features that effectively represents the data such that classifiers can be applied. Most prior work utilizes frequent pattern mining to define a feature set based on discovered patterns. However, frequent pattern mining is computationally expensive and often discovers many irrelevant patterns. To address this shortcoming, we propose the FIBS framework for classifying IBTSs. FIBS extracts features relevant to classification from IBTSs based on relative frequency and temporal relations. To avoid selecting irrelevant features, a filter-based selection strategy is incorporated into FIBS. Our empirical evaluation on eight real-world datasets demonstrates the effectiveness of our methods in practice. The results provide evidence that FIBS effectively represents IBTSs for classification algorithms, which contributes to similar or significantly better accuracy compared to state-of-the-art competitors. It also suggests that the feature selection strategy is beneficial to FIBS's performance. (In: Big Data Analytics and Knowledge Discovery, DaWaK 2020, Springer, Cham.)
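To make the feature-extraction idea concrete, here is a minimal sketch of turning an interval-based temporal sequence into a flat feature vector using per-label relative frequency plus counts of coarse pairwise temporal relations. The relation definitions and feature names are illustrative assumptions, not FIBS's exact feature set.

```python
# Minimal sketch: an IBTS is a list of (label, start, end) intervals; features
# are per-label relative frequency and counts of simple pairwise temporal
# relations. Illustrative only, not FIBS's exact feature definitions.
from collections import Counter
from itertools import combinations

def ibts_features(sequence):
    """sequence: list of (label, start, end) intervals."""
    feats = Counter()
    n = len(sequence)
    for label, _, _ in sequence:
        feats[f"freq:{label}"] += 1.0 / n            # relative frequency
    for (la, sa, ea), (lb, sb, eb) in combinations(sequence, 2):
        if ea < sb or eb < sa:
            rel = "before"
        elif sa == sb and ea == eb:
            rel = "equal"
        else:
            rel = "overlaps"                          # coarse catch-all
        feats[f"rel:{rel}:{la}-{lb}"] += 1
    return dict(feats)

example = [("A", 0, 4), ("B", 2, 6), ("C", 7, 9)]
print(ibts_features(example))
```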
Allen's Interval Algebra Makes the Difference
Allen's Interval Algebra constitutes a framework for reasoning about temporal information in a qualitative manner. In particular, it uses intervals, i.e., pairs of endpoints, on the timeline to represent entities corresponding to actions, events, or tasks, and binary relations such as precedes and overlaps to encode the possible configurations between those entities. Allen's calculus has found its way into many academic and industrial applications involving, most commonly, planning and scheduling, temporal databases, and healthcare. In this paper, we present a novel encoding of Interval Algebra using answer-set programming (ASP) extended by difference constraints, i.e., the fragment abbreviated as ASP(DL), and demonstrate its performance via a preliminary experimental evaluation. Although our ASP encoding is presented in the case of Allen's calculus for the sake of clarity, we suggest that analogous encodings can be devised for other point-based calculi, too. (Part of the DECLARE 19 proceedings.)
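The ASP(DL) encoding itself is not reproduced here; as a small illustration of the qualitative relations the calculus reasons over, the sketch below determines which of Allen's thirteen basic relations holds between two intervals with known endpoints. The function name and interval representation are assumptions for illustration only.

```python
# Sketch: which of Allen's 13 basic relations holds between two intervals
# with concrete endpoints. The paper's ASP(DL) encoding works the other way
# around (constraining unknown endpoints); this only shows the vocabulary.
def allen_relation(a, b):
    """a, b: (start, end) with start < end; returns the basic Allen relation."""
    (sa, ea), (sb, eb) = a, b
    if ea < sb:  return "precedes"
    if eb < sa:  return "preceded-by"
    if ea == sb: return "meets"
    if eb == sa: return "met-by"
    if sa == sb and ea == eb: return "equals"
    if sa == sb:  return "starts" if ea < eb else "started-by"
    if ea == eb:  return "finishes" if sa > sb else "finished-by"
    if sb < sa and ea < eb: return "during"
    if sa < sb and eb < ea: return "contains"
    return "overlaps" if sa < sb else "overlapped-by"

print(allen_relation((1, 3), (2, 5)))   # -> overlaps
```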
Information Discovery on Electronic Health Records Using Authority Flow Techniques
Background: As the use of electronic health records (EHRs) becomes more widespread, so does the need to search and provide effective information discovery within them. Querying by keyword has emerged as one of the most effective paradigms for searching. Most work in this area is based on traditional Information Retrieval (IR) techniques, where each document is compared individually against the query. We compare the effectiveness of two fundamentally different techniques for keyword search of EHRs. Methods: We built two ranking systems. The traditional BM25 system exploits the EHRs' content without regard to associations among the entities within them. The Clinical ObjectRank (CO) system exploits the entities' associations in EHRs using an authority-flow algorithm to discover the most relevant entities. BM25 and CO were deployed on an EHR dataset of the cardiovascular division of Miami Children's Hospital. Using sequences of keywords as queries, sensitivity and specificity were measured by two physicians for a set of 11 queries related to congenital cardiac disease. Results: Our pilot evaluation showed that CO outperforms BM25 in terms of sensitivity (65% vs. 38%), by 71% on average, while maintaining specificity (64% vs. 61%). Conclusions: Authority-flow techniques can greatly improve the detection of relevant information in EHRs and hence deserve further study.
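The abstract does not detail the authority-flow computation; below is a minimal sketch of a query-specific authority-flow iteration (in the style of personalized PageRank) over a toy graph of EHR entities. The entity names, graph, damping factor, and keyword seeding are illustrative assumptions, not the Clinical ObjectRank implementation.

```python
# Minimal sketch of query-specific authority flow over a toy graph of EHR
# entities. Entities matching the query keywords seed the flow; authority
# then propagates along associations between entities. Illustrative only.
import numpy as np

entities = ["patient:1", "visit:7", "report:echo", "diagnosis:ASD"]
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]       # associations between entities

n = len(entities)
A = np.zeros((n, n))
for src, dst in edges:                          # authority flows both ways
    A[src, dst] = A[dst, src] = 1.0
A = A / np.maximum(A.sum(axis=1, keepdims=True), 1)   # row-normalize

seed = np.array([0.0, 0.0, 1.0, 0.0])           # entities matching the keywords
score, d = seed.copy(), 0.85
for _ in range(50):                             # power iteration
    score = (1 - d) * seed + d * A.T @ score

for ent, s in sorted(zip(entities, score), key=lambda t: -t[1]):
    print(f"{ent:15s} {s:.3f}")
```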
A Scalable Architecture for Incremental Specification and Maintenance of Procedural and Declarative Clinical Decision-Support Knowledge
Clinical guidelines have been shown to improve the quality of medical care and to reduce its costs. However, most guidelines exist in a free-text representation and, without automation, are not sufficiently accessible to clinicians at the point of care. A prerequisite for automated guideline application is a machine-comprehensible representation of the guidelines. In this study, we designed and implemented a scalable architecture to support medical experts and knowledge engineers in specifying and maintaining the procedural and declarative aspects of clinical guideline knowledge, resulting in a machine-comprehensible representation. The new framework significantly extends our previous work on the Digital electronic Guidelines Library (DeGeL). The current study designed and implemented Gesher, a graphical framework for the specification of declarative and procedural clinical knowledge. We performed three different experiments to evaluate the functionality and usability of the major aspects of the new framework: specification of procedural clinical knowledge, specification of declarative clinical knowledge, and exploration of a given clinical guideline. The subjects included clinicians and knowledge engineers (27 participants overall). The evaluations indicated high levels of completeness and correctness of the guideline specification process by both the clinicians and the knowledge engineers, although the best results, in the case of declarative-knowledge specification, were achieved by teams that included a clinician and a knowledge engineer. The usability scores were high as well, although the clinicians’ assessment was significantly lower than that of the knowledge engineers.