48 research outputs found

    Semantic Smart Homes: Towards Knowledge Rich Assisted Living Environments

    The complexity of emergency supply chains makes their management very difficult. In this article we present a comprehensive view of the French emergency supply chain (ESC), propose an ad hoc relationship model between actors, and introduce a GRAI grid-based model as a new approach for addressing ESC deficiencies, especially those related to decision making. Throughout the article we discuss the value of enterprise modelling for modelling the ESC, as well as the characterization of the different issues related to the steering of the ESC. A literature review based on the GRAI grid model is also presented and discussed. The GRAI method is used here because it draws on the theory of complex systems and provides a dynamic model of an organization that focuses on decision making and the communication of decisions.

    A Logical Framework for Behaviour Reasoning and Assistance in a Smart Home

    Smart Homes (SH) have emerged as a realistic intelligent assistive environment capable of providing assistive living for the elderly and the disabled. Nevertheless, it remains a challenge to assist the inhabitants of a SH in performing the "right" action(s) at the "right" time in the "right" place. To address this challenge, this paper introduces a novel logical framework for cognitive behavioural modelling, reasoning and assistance based on a highly developed logical theory of actions, the Event Calculus. Cognitive models go beyond data-centric behavioural models in that they govern an inhabitant's behaviour by reasoning about the inhabitant's knowledge, actions and environmental events. We outline the theoretical foundation of the approach and describe cognitive modelling of a SH. We discuss the reasoning capabilities and algorithms of the cognitive SH model and present the details of the various tasks it can support. A system architecture is proposed to illustrate the use of the framework in facilitating assistive living. We demonstrate the perceived effectiveness of the approach through its operation in the context of a real-world daily activity scenario. Index terms: Event Calculus, cognitive modelling.
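
    As a loose illustration of Event Calculus style reasoning, the Python sketch below encodes simplified Initiates/Terminates rules and checks which fluents hold at a given time from a narrative of timestamped events. The fluent and event names are invented for the example, and the formulation is far simpler than the framework described in the paper.

```python
# Minimal discrete Event Calculus style inference: a fluent holds at time t
# if some earlier event initiated it and no intervening event terminated it.

INITIATES = {           # event -> fluents it initiates (illustrative names)
    "enter_kitchen": {"in_kitchen"},
    "turn_on_tap":   {"tap_running"},
}
TERMINATES = {          # event -> fluents it terminates
    "leave_kitchen": {"in_kitchen"},
    "turn_off_tap":  {"tap_running"},
}

def holds_at(fluent, t, narrative):
    """Return True if `fluent` holds at time `t`, given (time, event) pairs."""
    holds = False
    for time, event in sorted(narrative):
        if time >= t:
            break
        if fluent in INITIATES.get(event, set()):
            holds = True
        if fluent in TERMINATES.get(event, set()):
            holds = False
    return holds

narrative = [(1, "enter_kitchen"), (2, "turn_on_tap"), (10, "leave_kitchen")]

# The tap is still running at t=12 although the kitchen is empty: a simple
# cue that an assistive prompt may be needed.
print(holds_at("tap_running", 12, narrative))   # True
print(holds_at("in_kitchen", 12, narrative))    # False
```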

    The effects of 40 Hz low-pass filtering on the spatial QRS-T angle

    The spatial QRS-T angle (SA) is a vectorcardiographic (VCG) parameter that has been identified as a marker for changes in the ventricular depolarization and repolarization sequence. The SA is defined as the angle subtended by the mean QRS vector and the mean T vector of the VCG. The SA is typically obtained from VCG data that is derived from the resting 12-lead electrocardiogram (ECG). Resting 12-lead ECG data is commonly recorded using a low-pass filter with a cutoff frequency of 150 Hz. The ability of the SA to quantify changes in the ventricular depolarization and repolarization sequence makes the SA potentially attractive in a number of different 12-lead ECG monitoring applications. However, the 12-lead ECG data obtained in such monitoring applications is typically recorded using a low-pass filter cutoff frequency of 40 Hz. The aim of this research was to quantify the differences between the SA computed using 40 Hz low-pass filtered ECG data (SA40) and the SA computed using 150 Hz low-pass filtered ECG data (SA150). We assessed the difference between SA40 and SA150 using a study population of 726 subjects. The differences were quantified as systematic error (mean difference) and random error (span of the Bland-Altman 95% limits of agreement). The systematic error between SA40 and SA150 was found to be -0.126° [95% confidence interval: -0.146° to -0.107°]. The random error was quantified as 1.045° [95% confidence interval: 0.917° to 1.189°]. The findings of this research suggest that it is possible to accurately determine the value of the SA using 40 Hz low-pass filtered ECG data. This indicates that the SA can be recorded in applications that require the use of 40 Hz low-pass ECG monitoring filters.
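
    As a rough illustration of how the SA and the Bland-Altman error measures described above can be computed, the following Python sketch assumes the Frank VCG is available as a NumPy array of X, Y, Z samples and that the QRS and T-wave window boundaries have already been located; the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def spatial_qrs_t_angle(vcg, qrs_window, t_window):
    """Angle (degrees) between the mean QRS vector and the mean T vector.

    vcg: array of shape (n_samples, 3) holding Frank leads X, Y, Z.
    qrs_window, t_window: (start, end) sample indices of the QRS complex and T wave.
    """
    mean_qrs = vcg[qrs_window[0]:qrs_window[1]].mean(axis=0)
    mean_t = vcg[t_window[0]:t_window[1]].mean(axis=0)
    cos_angle = np.dot(mean_qrs, mean_t) / (np.linalg.norm(mean_qrs) * np.linalg.norm(mean_t))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def bland_altman(sa_40, sa_150):
    """Systematic error (mean difference) and random error (span of 95% limits of agreement)."""
    diff = np.asarray(sa_40) - np.asarray(sa_150)
    systematic = diff.mean()
    random_span = 2 * 1.96 * diff.std(ddof=1)   # upper limit of agreement minus lower limit
    return systematic, random_span
```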

    Automated detection of atrial fibrillation using RR intervals and multivariate-based classification

    Automated detection of atrial fibrillation (AF) from the electrocardiogram (ECG) still remains a challenge. In this study, we investigated two multivariate-based classification techniques, Random Forests (RF) and k-nearest neighbor (k-nn), for improved automated detection of AF from the ECG. We compiled a new database from ECG data taken from existing sources. R-R intervals were then analyzed using four previously described R-R irregularity measurements: (1) the coefficient of sample entropy (CoSEn), (2) the coefficient of variance (CV), (3) the root mean square of the successive differences (RMSSD), and (4) the median absolute deviation (MAD). Using outputs from all four R-R irregularity measurements, RF and k-nn models were trained. RF classification improved AF detection over CoSEn, increasing overall specificity from 80.1% to 98.3% and positive predictive value (PPV) from 51.8% to 92.1%, with a reduction in sensitivity from 97.6% to 92.8%. k-nn also improved specificity and PPV over CoSEn; however, the sensitivity of this approach was considerably reduced (68.0%).
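
    The feature extraction and classifier training described above could be sketched roughly as follows in Python, assuming per-segment R-R interval arrays and AF labels are already available; CoSEn is omitted for brevity and the scikit-learn hyperparameters shown are placeholders, not the settings used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

def rr_features(rr):
    """Three of the four R-R irregularity measures used in the study.

    rr: array of R-R intervals (seconds) for one ECG segment.
    CoSEn (coefficient of sample entropy) is omitted here for brevity.
    """
    rr = np.asarray(rr, dtype=float)
    cv = rr.std(ddof=1) / rr.mean()              # coefficient of variance
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # root mean square of successive differences
    mad = np.median(np.abs(rr - np.median(rr)))  # median absolute deviation
    return [cv, rmssd, mad]

def train_classifiers(segments, labels):
    """Fit RF and k-nn models on feature vectors derived from R-R segments."""
    X = np.array([rr_features(s) for s in segments])
    y = np.asarray(labels)                       # 1 = AF, 0 = non-AF
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
    return rf, knn
```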

    The effects of electrode placement on an automated algorithm for the detection of ST segment changes on the 12-lead ECG

    In this study we investigate the effect that ECG electrode placement can have on the detection of ST segment changes. Body surface potential maps (BSPMs) from 45 subjects undergoing PTCA (percutaneous transluminal coronary angioplasty) were analysed (15 left anterior descending, 15 left circumflex and 15 right coronary artery). 12-lead ECGs were extracted from the BSPMs corresponding with correct precordial electrode positioning and with simultaneous vertical movement of all of the precordial leads in 5 mm increments up to ±50 mm away from the correct position. A computer algorithm was developed based on current guidelines for the detection of STEMI and Non-STEMI. This algorithm was applied to all of the extracted 12-lead ECGs. Median sensitivity and specificity, based upon all baseline versus all peak balloon inflation cases, were calculated for the results generated at each electrode position. With the precordial leads positioned correctly the sensitivity and specificity were 51.1% and 91.1% respectively. When all precordial leads were placed 50 mm superior to their correct position the sensitivity increased to 57.8% whilst specificity remained unchanged. At 50 mm inferior to the correct position the sensitivity and specificity were 46.7% and 88.9% respectively. The results show a variation of more than 10% in sensitivity when the electrodes are moved up to 100 mm vertically.
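
    A much-simplified sketch of a threshold-based ST-segment check and of the sensitivity/specificity calculation is given below; the lead groupings and the 0.1 mV threshold are coarse assumptions and do not reproduce the full STEMI/Non-STEMI guideline criteria that the study implemented.

```python
# Simplified ST-elevation check: flag an ECG when at least two leads from the
# same anatomically contiguous group exceed an elevation threshold (in mV).
# Groups and threshold are simplifications for illustration only.

CONTIGUOUS_GROUPS = {
    "anterior": ["V1", "V2", "V3", "V4"],
    "lateral":  ["I", "aVL", "V5", "V6"],
    "inferior": ["II", "III", "aVF"],
}

def st_elevation_flag(st_mv, threshold=0.1):
    """st_mv: dict mapping lead name to ST-segment amplitude in mV."""
    for leads in CONTIGUOUS_GROUPS.values():
        elevated = [lead for lead in leads if st_mv.get(lead, 0.0) >= threshold]
        if len(elevated) >= 2:
            return True
    return False

def sensitivity_specificity(predictions, truth):
    """Sensitivity and specificity over paired boolean prediction/truth lists."""
    tp = sum(p and t for p, t in zip(predictions, truth))
    tn = sum((not p) and (not t) for p, t in zip(predictions, truth))
    fp = sum(p and (not t) for p, t in zip(predictions, truth))
    fn = sum((not p) and t for p, t in zip(predictions, truth))
    return tp / (tp + fn), tn / (tn + fp)
```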

    On the derivation of the spatial QRS-T angle from Mason-Likar leads I, II, V2 and V5

    The spatial QRS-T angle (SA) has been identified as a marker for changes in the ventricular depolarization and repolarization sequence. The determination of the SA requires vectorcardiographic (VCG) data. However, VCG data is seldom recorded in monitoring applications, mainly because the number and location of the electrodes required for recording the Frank VCG complicate its acquisition in such settings. Alternatively, reduced lead systems (RLS) allow for the derivation of the Frank VCG from a reduced number of electrocardiographic (ECG) leads. Derived Frank VCGs provide a practical means for the determination of the SA in monitoring applications. One widely studied RLS that is used in clinical practice is based upon Mason-Likar leads I, II, V2 and V5 (MLRL). The aim of this research was twofold: first, to develop a linear ECG lead transformation matrix that allows for the derivation of the Frank VCG from the MLRL system; second, to assess the accuracy of the MLRL-derived SA (MSA). We used ECG data recorded from 545 subjects for the development of the linear ECG lead transformation matrix. The accuracy of the MSA was assessed by analyzing the differences between the MSA and the SA using the ECG data of 181 subjects. The differences between the MSA and the SA were quantified as systematic error (mean difference) and random error (span of the Bland-Altman 95% limits of agreement). The systematic error between the MSA and the SA was found to be 9.38° [95% confidence interval: 7.03° to 11.74°]. The random error was quantified as 62.97° [95% confidence interval: 56.55° to 70.95°].
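
    A minimal sketch of how such a linear lead transformation matrix could be fitted by least squares and applied is shown below, assuming pooled, simultaneously recorded Mason-Likar basis leads and Frank leads are available as NumPy arrays; this illustrates the general technique rather than the matrix developed in the study. The derived VCG could then be passed to a spatial QRS-T angle routine such as the one sketched earlier.

```python
import numpy as np

def fit_lead_transform(basis, frank):
    """Least-squares fit of a matrix T such that frank is approximately basis @ T.

    basis: (n_samples, 4) Mason-Likar leads I, II, V2, V5 pooled over training subjects.
    frank: (n_samples, 3) simultaneously recorded Frank leads X, Y, Z.
    """
    T, *_ = np.linalg.lstsq(basis, frank, rcond=None)
    return T                                   # shape (4, 3)

def derive_vcg(basis, T):
    """Derive the Frank VCG from the reduced lead set using the fitted matrix."""
    return basis @ T
```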

    Selection of optimal recording sites for limited lead body surface potential mapping: A sequential selection based approach

    BACKGROUND: In this study we propose a new algorithm for selecting optimal recording sites for limited lead body surface potential mapping. The proposed algorithm differs from previously reported methods in that it is based upon a simple and intuitive data-driven technique that does not make any presumptions about deterministic characteristics of the data. It uses a forward selection based search technique to find the best combination of electrocardiographic leads. METHODS: The study was conducted using a dataset of body surface potential maps (BSPM) recorded from 116 subjects, comprising 59 normal subjects and 57 subjects exhibiting evidence of old myocardial infarction (MI). The performance of the algorithm was evaluated using spatial RMS voltage error and correlation coefficient to compare original and reconstructed map frames. RESULTS: In all, three configurations of the algorithm were evaluated, and it was concluded that there was little difference in the performance of the various configurations. In addition to observing the performance of the selection algorithm, several lead subsets of 32 electrodes as chosen by the various configurations of the algorithm were evaluated. The rationale for choosing this number of recording sites was to allow comparison with a previous study that used a different algorithm, where 32 leads were deemed to provide an acceptable level of reconstruction performance. CONCLUSION: Although the lead configurations suggested in this study were not identical to those suggested in the previous work, the systems did bear similar characteristics in that recording sites were chosen with greatest density in the precordial region.
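
    The forward selection idea can be sketched as follows, assuming the BSPM frames are available as a NumPy array; the least-squares reconstruction and the fixed target of 32 leads are illustrative choices rather than the exact configurations evaluated in the study.

```python
import numpy as np

def forward_select_leads(bspm, n_leads=32):
    """Greedy forward selection of recording sites.

    bspm: (n_frames, n_electrodes) body surface potential map frames.
    At each step, add the electrode whose inclusion gives the lowest RMS error
    when the full map is reconstructed from the selected subset by least squares.
    """
    n_electrodes = bspm.shape[1]
    selected = []
    for _ in range(n_leads):
        best_lead, best_err = None, np.inf
        for lead in range(n_electrodes):
            if lead in selected:
                continue
            subset = selected + [lead]
            coeffs, *_ = np.linalg.lstsq(bspm[:, subset], bspm, rcond=None)
            err = np.sqrt(np.mean((bspm[:, subset] @ coeffs - bspm) ** 2))
            if err < best_err:
                best_lead, best_err = lead, err
        selected.append(best_lead)
    return selected
```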

    Morphology-based detection of premature ventricular contractions

    Premature ventricular contraction (PVC) is a type of ectopic heartbeat that is commonly found in the healthy population and is often considered benign. However, PVCs are reported to adversely affect the accuracy of R-R variability based electrocardiographic (ECG) algorithms. This study proposes a Principal Component Analysis (PCA) based algorithmic approach to detect PVCs based on their morphology. The eigenvectors were derived from a signal window around the R-peak, where the window length for PVC beats (wPVC) and that for normal sinus rhythm (NSR) beats (wNSR) were set to 0.55 seconds and 0.16 seconds respectively. We used 24 ECG recordings from the MIT-BIH arrhythmia database as the training dataset and the remaining 24 ECG recordings as the testing dataset. Using the derived eigenvectors and linear regression (LR) analysis, complexes corresponding to wNSR and wPVC were estimated from the training and testing datasets. Four different classification methods were employed to differentiate between wPVC and wNSR: root mean squared error (RMSE), Pearson product-moment correlation coefficient comparison, histogram probability distribution, and k-Nearest Neighbour (KNN). All four methods were implemented individually to classify wPVC and wNSR. The performance of each classification approach was evaluated by computing sensitivity and specificity. With a sensitivity of 93.45% and a specificity of 93.14%, the KNN-based classification method showed the best performance. The method proposed in this study allows for an effective differentiation between NSR beats and PVC beats.
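
    A rough Python sketch of PCA-based morphology features followed by k-NN classification is given below, assuming fixed-length beat windows centred on the R-peak have already been extracted and resampled; the component and neighbour counts are placeholders rather than the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def train_pvc_classifier(beat_windows, labels, n_components=5, n_neighbors=3):
    """PCA feature extraction plus k-NN classification of beat morphology.

    beat_windows: (n_beats, n_samples) fixed-length windows centred on the R-peak.
    labels: 1 for PVC beats, 0 for normal sinus rhythm beats.
    """
    pca = PCA(n_components=n_components).fit(beat_windows)
    features = pca.transform(beat_windows)    # projections onto the leading eigenvectors
    knn = KNeighborsClassifier(n_neighbors=n_neighbors).fit(features, labels)
    return pca, knn

def classify_beats(pca, knn, beat_windows):
    """Classify new beats with the fitted PCA projection and k-NN model."""
    return knn.predict(pca.transform(beat_windows))
```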

    Influence of the training set composition on the estimation performance of linear ECG-lead transformations.

    Linear ECG-lead transformations (LELTs) are used to estimate unrecorded target leads by applying a LELT matrix to a number of recorded basis leads. Such LELT matrices are commonly developed using training datasets that are composed of ECGs belonging to different diagnostic classes (DCs). The aim of our research was to assess the influence of the training set composition on the estimation performance of LELTs that estimate target leads V1, V3, V4 and V6 from basis leads I, II, V2 and V5 of the 12-lead ECG. Our assessment was performed using ECGs from three DCs: left ventricular hypertrophy, right bundle branch block and normal (ECGs without abnormalities). Training sets with different DC compositions were used for the development of LELT matrices. These matrices were used to estimate the target leads of different test sets. The estimation performance of the developed matrices was quantified using root mean square error values calculated between derived and recorded target leads. Our findings indicate that unbalanced training sets can lead to LELTs that show large estimation performance variability across different DCs. Balanced training sets were found to produce LELTs that performed well across multiple DCs. We recommend balanced training sets for the development of LELTs.
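
    The following sketch illustrates, under assumed data layouts, how an LELT matrix might be fitted from a training pool and its estimation performance summarised per diagnostic class with RMSE; drawing equal numbers of samples per DC into the training pool before fitting corresponds to the balanced design recommended above.

```python
import numpy as np

def fit_lelt(basis, target):
    """Least-squares LELT matrix M such that target is approximately basis @ M.

    basis:  (n_samples, 4) recorded basis leads I, II, V2, V5.
    target: (n_samples, 4) recorded target leads V1, V3, V4, V6.
    """
    M, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return M

def rmse_per_class(M, basis, target, classes):
    """RMSE between derived and recorded target leads, grouped by diagnostic class."""
    classes = np.asarray(classes)
    derived = basis @ M
    err = {}
    for c in np.unique(classes):
        mask = classes == c
        err[c] = np.sqrt(np.mean((derived[mask] - target[mask]) ** 2))
    return err
```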