
    SpotMe Emergency Location Service

    This document describes our disaster relief application, which helps people stranded by a natural disaster find a way out and get help. The purpose of this document is to explain how the application works, focusing on its design, use cases, and conceptual models. After a brief introduction, the paper covers the requirements for building an application at this scale and presents several use cases. To help the reader understand the application in finer detail, activity diagrams are shown alongside the models. Lastly, the document covers the technologies that will be used, a test plan, and a risk analysis.

    Development of Low Cost Heart Rate Monitoring Device and Classification Technique Using Fuzzy Logics Algorithm

    The heart, as one of the body's vital organs, has long been examined through changes in heart rate. Heart rate is affected by many factors, such as age, gender, and physiological condition. Better diagnoses can therefore be made if the interpretation of the heart rate signal is automated, eliminating human error while accounting for these influential factors; subjective readings may lead to imprecise diagnoses. In this project, the proposed tool is designed for medical experts and reliably interprets the heart signal based on age, gender, and heart condition. A PPG sensor was used to sense the heartbeats. The raw signal was then transferred over a wireless medium, using RF transceivers and an Arduino Uno as the microcontroller, to a remote base station implemented as a MATLAB GUI. This lets end users (physicians/caregivers) monitor heart rate in real time without wires running from the patient ward/room to the remote station. The acquired signal is classified with a fuzzy logic algorithm in the MATLAB Fuzzy Logic Toolbox. The cost-effectiveness of the proposed device is a further benefit of an automated heart rate monitoring system.
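
    The abstract does not include the rule base itself; as an illustration only, the sketch below shows the kind of fuzzy classification the MATLAB Fuzzy Logic Toolbox performs, rewritten in plain Python. The membership ranges, the age adjustment, and the three labels are hypothetical assumptions, not the authors' implementation.

```python
# Minimal sketch of a fuzzy heart-rate classifier, approximating what the
# paper does in the MATLAB Fuzzy Logic Toolbox. All membership ranges and
# the age adjustment below are hypothetical, for illustration only.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def classify_heart_rate(bpm, age):
    # Hypothetical age adjustment: the "normal" band narrows after 60.
    shift = 5.0 if age >= 60 else 0.0
    memberships = {
        "bradycardia": trapezoid(bpm, -1, 0, 50, 60 + shift),
        "normal":      trapezoid(bpm, 50, 60 + shift, 90 - shift, 100),
        "tachycardia": trapezoid(bpm, 90 - shift, 100, 220, 221),
    }
    # Defuzzify by taking the label with the highest membership degree.
    label = max(memberships, key=memberships.get)
    return label, memberships

if __name__ == "__main__":
    # Near a boundary, several labels have nonzero degrees; the max wins.
    print(classify_heart_rate(bpm=97, age=65))
```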

    A New Open Information Extraction System Using Sentence Difficulty Estimation

    The World Wide Web holds a considerable amount of information expressed in natural language. While unstructured text is often difficult for machines to understand, Open Information Extraction (OIE) is a relation-independent extraction paradigm designed to extract assertions directly from massive and heterogeneous corpora. Keeping computational cost low is a central demand on Open Relation Extraction (ORE) systems. Many ORE methods have been proposed recently, covering a wide range of NLP tools, from ``shallow'' (e.g., part-of-speech tagging) to ``deep'' (e.g., semantic role labeling), and there is a trade-off between the depth of the NLP tools and the efficiency (computational cost) of the ORE system. This paper describes a novel approach called Sentence Difficulty Estimator for Open Information Extraction (SDE-OIE), which automatically estimates relation extraction difficulty by training difficulty classifiers. These classifiers route each input sentence to an appropriate OIE extractor in order to decrease the overall computational cost. Our evaluations show that intelligently selecting the proper depth of ORE system significantly improves the effectiveness and scalability of SDE-OIE: it avoids wasting resources and achieves almost the same performance as its constituent deep extractor in considerably less time.
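
    The core idea, routing each sentence to the cheapest extractor predicted to handle it, can be sketched in a few lines. The features, the scoring function, and the extractor stubs below are invented placeholders, not the SDE-OIE implementation:

```python
# Illustrative sketch of difficulty-based routing between a cheap "shallow"
# extractor and an expensive "deep" one. The feature set, threshold, and
# extractor stubs are assumptions for illustration, not the SDE-OIE code.

def difficulty_features(sentence: str) -> list[float]:
    tokens = sentence.split()
    # Crude proxies for syntactic difficulty: length, commas, clause markers.
    return [
        len(tokens),
        sentence.count(","),
        sum(t.lower() in {"which", "that", "who", "because"} for t in tokens),
    ]

def predict_difficulty(features: list[float]) -> float:
    # Stand-in for a trained difficulty classifier; returns a score in [0, 1].
    length, commas, clauses = features
    return min(1.0, 0.02 * length + 0.15 * commas + 0.2 * clauses)

def shallow_extract(sentence):
    return [("shallow", sentence)]  # placeholder: e.g. POS-pattern based, cheap

def deep_extract(sentence):
    return [("deep", sentence)]     # placeholder: e.g. SRL based, expensive

def extract(sentence: str, threshold: float = 0.5):
    """Route easy sentences to the shallow extractor, hard ones to the deep one."""
    score = predict_difficulty(difficulty_features(sentence))
    return shallow_extract(sentence) if score < threshold else deep_extract(sentence)

print(extract("Paris is the capital of France."))           # shallow path
print(extract("The bill, which the senate amended because "
              "of lobbying, passed despite vetoes."))        # deep path
```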

    PerSHOP -- A Persian dataset for shopping dialogue systems modeling

    Nowadays, dialogue systems are used in many fields of industry and research; successful instances include Apple Siri, Google Assistant, and IBM Watson. Task-oriented dialogue systems are a category of these that handle specific tasks, such as booking plane tickets or making restaurant reservations. Shopping is one of the most popular areas for these systems: the bot replaces the human salesperson and interacts with customers through conversation. Annotated data is needed to train the models behind such systems. In this paper, we developed a dataset of dialogues in the Persian language through crowd-sourcing and annotated these dialogues to train a model. The dataset contains nearly 22k utterances across 15 different domains and 1,061 dialogues. It is the largest Persian dataset in this field and is provided freely so that future researchers can use it. We also propose baseline models for natural language understanding (NLU) that perform two tasks: intent classification and entity extraction. The F1 scores obtained are around 91% for intent classification and around 93% for entity extraction, which can serve as baselines for future research.
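
    To make the two NLU baseline tasks concrete, here is a minimal sketch on toy data. The example utterances, intent labels, and entity lookup are invented English stand-ins; PerSHOP's actual Persian schema and baseline models may differ.

```python
# Illustrative sketch of the two NLU baseline tasks: intent classification
# and entity extraction. Toy data only; not the PerSHOP baselines.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training utterances (English stand-ins for Persian shopping dialogues).
utterances = [
    "I want to buy a red jacket",
    "How much does this phone cost",
    "Do you have these shoes in size 42",
    "What is the price of the laptop",
]
intents = ["request_item", "ask_price", "request_item", "ask_price"]

# Intent classification baseline: TF-IDF features + logistic regression.
intent_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
intent_clf.fit(utterances, intents)
print(intent_clf.predict(["how much is this jacket"]))  # likely ['ask_price']

# Entity extraction stand-in: a trivial gazetteer lookup in place of the
# paper's sequence-labeling models, which would tag tokens (e.g. BIO tags).
PRODUCTS = {"jacket", "phone", "shoes", "laptop"}

def extract_entities(utterance: str):
    return [(tok, "PRODUCT") for tok in utterance.lower().split()
            if tok in PRODUCTS]

print(extract_entities("I want to buy a red jacket"))  # [('jacket', 'PRODUCT')]
```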