9 research outputs found

    In-Network Data Reduction Approach Based On Smart Sensing

    The rapid advances in wireless communication and sensor technologies facilitate the development of viable mobile-health applications that boost opportunities for ubiquitous real-time healthcare monitoring without constraining patients' activities. However, remote healthcare monitoring requires continuous sensing of different analog signals, which generates large volumes of data that need to be processed, recorded, and transmitted. Developing efficient in-network data reduction techniques is therefore essential in such applications. In this paper, we propose an in-network approach for data reduction based on fuzzy formal concept analysis. The goal is to reduce the amount of transmitted data by keeping only the minimal representative data for each class of patients. Using such an approach, the sender can effectively reconfigure its transmission settings by varying the target precision level while maintaining the required application classification accuracy. Our results show the excellent performance of the proposed scheme in terms of data reduction gain and classification accuracy, and the advantages it exhibits with respect to state-of-the-art techniques.
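The per-class reduction idea can be sketched as follows; the attribute names, the crisp thresholding at the precision level, and the function reduce_per_class are illustrative assumptions, not the paper's exact fuzzy-FCA algorithm:

```python
def reduce_per_class(samples, labels, precision):
    """Sketch of per-class data reduction: threshold fuzzy attribute
    degrees at `precision` to obtain a crisp intent per sample, then keep
    one representative sample per distinct intent within each class."""
    kept = []
    seen = set()
    for sample, label in zip(samples, labels):
        # Crisp intent: attributes whose membership degree reaches the precision level.
        intent = frozenset(a for a, deg in sample.items() if deg >= precision)
        key = (label, intent)
        if key not in seen:
            seen.add(key)
            kept.append((sample, label))
    return kept
```

Raising the precision level coarsens the intents, so more samples collapse onto the same representative and fewer records need to be transmitted.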

    Edge-based Compression and Classification for Smart Healthcare Systems: Concept, Implementation and Evaluation

    Smart healthcare systems require recording, transmitting, and processing large volumes of multimodal medical data generated by different types of sensors and medical devices, which is challenging and may render some remote health monitoring applications impractical. Moving computational intelligence to the network edge is a promising approach for providing efficient and convenient ways of continuous remote monitoring. Implementing efficient edge-based classification and data reduction techniques is of paramount importance to enable smart healthcare systems with efficient, real-time, and cost-effective remote monitoring. Thus, we present our vision of leveraging edge computing to monitor, process, and make autonomous decisions for smart health applications. In particular, we present and implement an accurate and lightweight classification mechanism that, leveraging time-domain features extracted from the vital signs, allows for reliable seizure detection at the network edge with high classification accuracy and low computational requirements. We then propose and implement a selective data transfer scheme, which opts for the most convenient way of data transmission depending on the detected patient's condition. In addition, we propose a reliable, energy-efficient emergency notification system for epileptic seizure detection, based on conceptual learning and fuzzy classification. Our experimental results assess the performance of the proposed system in terms of data reduction, classification accuracy, battery lifetime, and transmission delay. We show the effectiveness of our system and its ability to outperform conventional remote monitoring systems that ignore data processing at the edge by: (i) achieving 98.3% classification accuracy for seizure detection, (ii) extending battery lifetime by 60%, and (iii) decreasing average transmission delay by 90%.
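The kind of lightweight time-domain feature extraction described above can be sketched as follows; the specific feature set (mean, variance, line length, zero crossings) is an assumption chosen for illustration, since the abstract does not list the exact features used:

```python
import statistics

def time_domain_features(window):
    """Compute simple time-domain features over a window of vital-sign
    samples. These are cheap to evaluate, which is what makes them
    suitable for classification on a resource-constrained edge device."""
    n = len(window)
    mean = statistics.fmean(window)
    var = statistics.pvariance(window)
    # Line length: total absolute first difference, a classic EEG seizure feature.
    line_length = sum(abs(window[i + 1] - window[i]) for i in range(n - 1))
    # Zero crossings counted around the window mean.
    zero_crossings = sum(
        1 for i in range(n - 1)
        if (window[i] - mean) * (window[i + 1] - mean) < 0
    )
    return {"mean": mean, "variance": var,
            "line_length": line_length, "zero_crossings": zero_crossings}
```

A classifier at the edge would consume these few scalars per window instead of the raw signal, which is where the data reduction and battery savings come from.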

    Inconsistencies Detection In Islamic Texts Of Law Interpretations ["fatawas"]

    Islamic web content offers a very convenient way for people to learn more about the Islamic religion and its correct practices. For instance, via these web sites they can ask for fatwas (Islamic advisory opinions) with greater ease and serenity. Given the sensitivity of the subject, large communities of researchers are working on the evaluation of these web sites according to several criteria. In particular, there is a substantial effort to check the consistency of the content with respect to the Islamic shariaa (or Islamic law). In this work we propose a semiautomatic approach for evaluating Islamic web content, in terms of inconsistency detection, composed of the following steps: (i) Domain selection and definition: identifying the most relevant named entities related to the selected domain as well as their corresponding values or keywords (NEV). At this stage, we started building the fatwa ontology by analyzing around 100 fatwas extracted from the online system. (ii) Formal representation of the Islamic content: representing the content as a formal context relating fatwas to NEV. Here, each named entity is split into different attributes in the database, where each attribute is associated with a possible instantiation of the named entity. (iii) Rule extraction: by applying the ConImp tool, we extract a set of implications (or rules) reflecting cause-effect relations between NEV. As an extended option aiming to provide more precise analysis, we propose the inclusion of negative attributes. For example, with the word "licit" we may associate "not licit" or "forbidden"; with the word "recommended" we may associate "not recommended", etc. At this stage, by using an extension of the Galois connection, we are able to find the different logical associations in a minimal way using the same tool, ConImp. (iv) Conceptual reasoning: the objective is to detect possible inconsistencies between the rules and evaluate their relevance.
Each rule is mapped to a binary table in a relational database model. By joining the obtained tables we are able to detect inconsistencies. We may also check whether a new law contradicts the existing set of laws by mapping the law into a logical expression: by creating a new table corresponding to its negation, we can automatically prove its consistency as soon as we obtain an empty join of the total set of joins. This preliminary study showed that the logical representation of fatwas gives promising results in detecting inconsistencies within the fatwa ontology. Future work includes automatic named entity extraction and automatic transformation of laws into a formatted database, with which we should be able to build a global system for inconsistency detection in the domain.
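The consistency check in step (iv) is performed with relational joins over binary tables; a simplified logical analogue, with hypothetical attribute names and a plain forward-chaining loop standing in for the database joins, might look like:

```python
def forward_chain(facts, rules):
    """Saturate a set of facts under implication rules.
    Each rule is a (premises, conclusion) pair: if all premises hold,
    the conclusion is added. This is a stand-in for joining the binary
    tables that encode the rules in the paper's relational model."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def inconsistent(facts, negation_pairs):
    """Report an inconsistency when an attribute and its negative
    attribute (e.g. "licit" / "not licit") are both derivable."""
    return any(p in facts and n in facts for p, n in negation_pairs)
```

Here two rules deriving both an attribute and its negative attribute from the same situation would be flagged, mirroring the non-empty join that signals a contradiction in the relational encoding.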

    A New Structural View Of The Holy Book Based On Specific Words: Towards Unique Chapters (surat) And Sentences (ayat) Characterization In The Quran

    In the context of Islamic web data analysis and authentication, an important task is to be able to authenticate the holy book when it is published on the web. For that purpose, in order to detect texts contained in the holy book, it seems natural to first characterize the words that are specific to each chapter (i.e. "Sourat") and the words characterizing each sentence in any chapter (i.e. "Aya"). In this research, we first mapped the text of the Quran to a binary context R linking each chapter to all the words contained in it, and by calculating the fringe relation F of R, we were able to discover in a very short time all the specific words of each chapter of the holy book. By applying the same approach, we found all the specific words of each sentence (i.e. "Aya") within the same chapter whenever possible. We found that almost all sentences in the same chapter have one or more specific words. Only sentences repeated in the same chapter, or sentences included in one another, may lack specific words. Observing words that are simultaneously specific to a chapter of the holy book and to a sentence in the same chapter gave us the idea of characterizing all specific sentences in each chapter with respect to the whole Quran. We found that for 42 chapters, all the specific words of the chapter are also specific to some sentence in the same chapter. Such specific words could be used to more quickly detect web sites containing some part of the Quran and should therefore help in checking their authenticity. As a matter of fact, by googling only two or three specific words of a chapter, we observed that the search results are directly related to the corresponding chapter of the Quran. All results have been obtained for Arabic texts with or without vowels. The use of adequate data structures and threads enabled us to produce efficient software written in Java. The present tool is directly useful for the recognition of distinct texts in any domain.
In the context of our current project, we plan to use the same methods to characterize Islamic books in general.
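The notion of a chapter-specific word (a word occurring in exactly one chapter of the binary context) can be sketched as follows; the function name and the dictionary-based data layout are illustrative assumptions, not the paper's Java implementation:

```python
from collections import Counter

def specific_words(chapters):
    """Given a mapping chapter -> set of words (the binary context R
    flattened by rows), return the words that occur in exactly one
    chapter, grouped by the chapter that owns them."""
    # Count in how many chapters each word appears.
    counts = Counter(w for words in chapters.values() for w in words)
    return {c: {w for w in words if counts[w] == 1}
            for c, words in chapters.items()}
```

The same routine applied to a sentence -> words mapping yields the sentence-specific words within a chapter.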

    Using fringes for minimal conceptual decomposition of binary contexts

    Extracting knowledge from huge data in a reasonable time is still a challenging problem. Most real data (structured or not) can be mapped to an equivalent binary context, with or without using a scaling method, as when extracting associations between words in a text or in machine learning systems. In this paper, our objective is to find a minimal coverage of a relation R with formal concepts. The problem is known to be NP-complete [1]. We exploit a particular difunctional relation embedded in any binary relation R, the fringe of R, to find an approximate conceptual coverage of R. We use formal properties of fringes to derive better algorithms for computing a minimal rectangular coverage of a binary relation. Here, a formal context is considered as a binary relation. By exploiting some background in relational algebra, we merge results of Belohlavek and Vychodil [2], using formal concept analysis, with previous results obtained by Kcherif et al. [3] using relational algebra. We finally propose decomposition algorithms based on the relational formalization and the fringe relation.
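One common formulation of the fringe keeps a pair (a, b) of R exactly when the rectangle it generates, i.e. all rows containing b crossed with all columns containing a, lies entirely inside R; whether this matches the authors' precise definition is an assumption. A minimal sketch:

```python
def fringe(R):
    """Fringe of a binary relation R given as a set of (a, b) pairs:
    (a, b) is kept iff the rectangle it generates (rows of b crossed
    with columns of a) is entirely contained in R. Such pairs belong to
    a single maximal rectangle, which makes them good seeds for a
    rectangular (conceptual) coverage of R."""
    rows = {}  # b -> set of all a with (a, b) in R
    cols = {}  # a -> set of all b with (a, b) in R
    for a, b in R:
        rows.setdefault(b, set()).add(a)
        cols.setdefault(a, set()).add(b)
    return {(a, b) for a, b in R
            if all((a2, b2) in R for a2 in rows[b] for b2 in cols[a])}
```

Each fringe pair pins down the unique maximal rectangle containing it, so a coverage algorithm can start from the fringe instead of enumerating all formal concepts.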