12 research outputs found

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    Decision rules construction: algorithm based on EAV model

    In this paper, an approach for constructing decision rules is proposed. It is studied both from the point of view of a supervised machine learning task, i.e., classification, and from the point of view of knowledge representation. The generated rules provide classification results comparable to the dynamic programming approach for optimizing decision rules relative to length or support. However, the proposed algorithm is based on a transformation of the decision table into entity–attribute–value (EAV) format. Additionally, a standard deviation function over the average values of attributes in particular decision classes is introduced. It makes it possible to select, from the whole set of attributes, only those that provide the highest degree of information about the decision. Decision rules are then constructed by partitioning the decision table into corresponding subtables. In contrast to the dynamic programming approach, not all attributes need to be taken into account, but only those with the highest values of the standard deviation per decision class. Consequently, the proposed solution is more time efficient because of its lower computational complexity. In the experiments, the support and length of the constructed decision rules were computed and compared with the values for optimal rules. The classification error on data sets from the UCI Machine Learning Repository was also obtained and compared with that of the dynamic programming approach. The experiments show that the constructed rules are not far from the optimal ones, and the classification results are comparable to those obtained with the dynamic programming extension.
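
    The attribute-selection step lends itself to a short sketch: score each attribute by the standard deviation of its per-class mean values, then partition the table on the best-scoring attribute. The Python fragment below is a minimal illustration of that idea only; the toy EAV triples, helper names and recursion hint are ours, not the paper's implementation.

```python
from collections import defaultdict
import statistics

# Decision table in entity-attribute-value (EAV) form:
# (entity_id, attribute, value), plus a decision per entity.
eav = [
    (0, "a1", 1), (0, "a2", 7),
    (1, "a1", 1), (1, "a2", 9),
    (2, "a1", 3), (2, "a2", 7),
    (3, "a1", 3), (3, "a2", 9),
]
decision = {0: "yes", 1: "yes", 2: "no", 3: "no"}

def attribute_scores(eav, decision):
    """Standard deviation of the per-class mean values of each attribute."""
    per_class = defaultdict(lambda: defaultdict(list))  # attr -> class -> values
    for ent, attr, val in eav:
        per_class[attr][decision[ent]].append(val)
    scores = {}
    for attr, classes in per_class.items():
        means = [statistics.mean(vals) for vals in classes.values()]
        scores[attr] = statistics.pstdev(means) if len(means) > 1 else 0.0
    return scores

scores = attribute_scores(eav, decision)
best = max(scores, key=scores.get)  # most class-discriminating attribute
print(scores)                       # {'a1': 1.0, 'a2': 0.0}
print(f"partition the table on {best!r} and recurse on each subtable")
```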

    Mobile Health Technologies

    Mobile Health Technologies, also known as mHealth technologies, have emerged amongst healthcare providers as the technologies of choice of the 21st century, delivering not only transformative change in healthcare delivery but also critical health information to different communities of practice in integrated healthcare information systems. mHealth technologies nurture seamless platforms and pragmatic tools for managing pertinent health information across the continuum of different healthcare providers. mHealth technologies commonly utilize mobile medical devices, monitoring and wireless devices, and/or telemedicine in healthcare delivery and health research. Today, mHealth technologies provide opportunities to record and monitor the conditions of patients with chronic diseases such as asthma, chronic obstructive pulmonary disease (COPD) and diabetes mellitus. The intent of this book is to enlighten readers about the theories and applications of mHealth technologies in the healthcare domain.

    Automated Testing: Requirements Propagation via Model Transformation in Embedded Software

    Testing is the most common activity for validating software systems and plays a key role in the software development process. In general, the software testing phase takes around 40-70% of the effort, time and cost. Although this area has been researched over a long period of time, a number of important issues, such as generating test cases from UCM scenarios and validating them, still need to be addressed. As a result, ensuring that embedded software behaves correctly is non-trivial, especially when testing with limited resources and seeking compliance with safety-critical software standards. It thus becomes imperative to adopt an approach or methodology based on tools and best engineering practices to improve the testing process. This research addresses the problem of testing embedded software with limited resources as follows. First, a reverse-engineering technique is applied to legacy software tests to discover a feasible transformation from the test layer to the test-requirement layer. The feasibility of transforming the legacy test cases into an abstract model is shown, along with a forward-engineering process to regenerate the test cases in a selected test language. Second, a new model-driven testing technique based on different granularity levels (MDTGL) is introduced to generate test cases. The new approach uses models to manage the complexity of the system under test (SUT). Automatic model transformation is applied to automate test case development, which is a tedious, error-prone, and recurrent software development task. Third, the model transformations that automate the development of test cases in the MDTGL methodology are validated against an industrial testing process using an embedded software specification. To enable the validation, a set of timed and functional requirements is introduced. Two case studies are run on an embedded system to generate test cases. The effectiveness of the two testing approaches is determined and contrasted with respect to the generation of test cases and the correctness of the generated workflow. Compared to several existing techniques, our new approach generated useful and effective test cases with far fewer resources in terms of time and labor. Finally, to enhance the applicability of MDTGL, the methodology is extended with a trace model that records traceability links among the generated testing artifacts. These traceability links, often mandated by software development standards, support traceability visualization, model-based coverage analysis and result evaluation.
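
    The core of such an approach is a model-to-test transformation. As a rough illustration of the idea (not the MDTGL metamodel), the sketch below turns a toy test-requirement model, including a timing constraint, into test-case skeletons; all class and function names here are assumptions.

```python
from dataclasses import dataclass, field

# Toy test-requirement model: each requirement carries a stimulus,
# an expected response and a timing constraint (milliseconds).
@dataclass
class TestRequirement:
    req_id: str
    stimulus: str
    expected: str
    deadline_ms: int

@dataclass
class TestModel:
    name: str
    requirements: list = field(default_factory=list)

def to_test_cases(model: TestModel) -> str:
    """Model-to-text transformation: emit one test function per requirement.

    send/wait_for in the generated code are assumed harness primitives.
    """
    lines = [f"# generated from model {model.name!r}"]
    for r in model.requirements:
        lines += [
            f"def test_{r.req_id.lower()}():",
            f"    send({r.stimulus!r})",
            f"    assert wait_for({r.expected!r}, timeout_ms={r.deadline_ms})",
            "",
        ]
    return "\n".join(lines)

model = TestModel("cruise_control", [
    TestRequirement("REQ_01", "SET_SPEED 80", "SPEED_HELD", 500),
])
print(to_test_cases(model))
```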

    Examining association between construction inspection grades and critical defects using data mining and fuzzy logic

    This paper explores the relations between defect types and quality inspection grades of public construction projects in Taiwan. Altogether, 499 defect types (classified from 17,648 defects) were found after analyzing 990 construction projects from the Public Construction Management Information System of the Public Construction Commission, the government unit that administers all public construction. The core of this research includes the following steps. (1) Data mining (DM) was used to derive 57 association rules, which altogether contain 30 of the 499 defect types. (2) K-means clustering was used to regroup the 990 projects on two attributes (defect frequency and the original grading score of each project) into four new quality classes, so that the projects are more evenly distributed across the four classes and the correctness and reliability of the subsequent analyses can be ensured. (3) Finally, analysis of variance (ANOVA), fuzzy logic, and correlation analysis were used to verify that the aforementioned 30 defect types are the important ones determining inspection grades. The results of this research can help stakeholders of construction projects pay more attention to the root causes of the critical defect types and thus markedly raise their management effectiveness.
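
    Step (2) amounts to clustering the 990 projects on just two standardized features. A minimal sketch with scikit-learn, using synthetic placeholder values rather than the Taiwanese inspection data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Two attributes per project: defect frequency and original grading score.
# Values here are synthetic placeholders, not the real inspection records.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.poisson(12, size=990),    # defect frequency
    rng.normal(82, 6, size=990),  # original grading score
])

# Standardize so neither attribute dominates the Euclidean distance,
# then regroup the 990 projects into four new quality classes.
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)
print(np.bincount(labels))  # number of projects per new quality class
```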

    Named Entity Recognition and Text Compression

    In recent years, social networks have become very popular, and it is easy for users to share their data through them. Since data on social networks are idiomatic, irregular and brief, and include acronyms and spelling errors, dealing with such data is more challenging than dealing with news or other formal texts. With the huge volume of posts each day, effective extraction and processing of these data will bring great benefit to information extraction applications. This thesis proposes a method to normalize Vietnamese informal text from social networks. The method identifies and normalizes informal text based on the structure of Vietnamese words, Vietnamese syllable rules, and a trigram model. After normalization, the data are processed by a named entity recognition (NER) model that uses six different types of features to recognize named entities in three predefined classes: Person (PER), Location (LOC), and Organization (ORG). Social network data are very large and grow daily, which raises the challenge of reducing their size; moreover, the trigram dictionary used for normalization is itself quite big and also needs to be compressed. To deal with this challenge, the thesis proposes three methods to compress text files, especially Vietnamese text. The first is a syllable-based method relying on the structure of Vietnamese morphosyllables, consonants, syllables and vowels. The second is trigram-based Vietnamese text compression built on a trigram dictionary. The third is based on an n-gram sliding window, using five dictionaries for unigrams, bigrams, trigrams, four-grams and five-grams; it achieves a promising compression ratio of around 90% and can be used for text files of any size.
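
    The n-gram sliding-window scheme can be pictured as longest-match dictionary coding: known n-grams are replaced by small integer codes, longest match first. The toy coder below is our simplification of that idea (two small dictionaries instead of five, plain integer codes instead of a compact binary encoding), not the thesis implementation.

```python
# Toy longest-match n-gram compression: replace known n-grams
# (longest first) with small integer codes; unknown tokens pass through.
dictionary = {
    ("mạng", "xã", "hội"): 0,  # "social network"
    ("dữ", "liệu"): 1,         # "data"
    ("mạng",): 2,
    ("dữ",): 3,
}
max_n = max(len(k) for k in dictionary)

def compress(tokens):
    out, i = [], 0
    while i < len(tokens):
        for n in range(min(max_n, len(tokens) - i), 0, -1):
            gram = tuple(tokens[i:i + n])
            if gram in dictionary:
                out.append(dictionary[gram])  # code for the longest match
                i += n
                break
        else:
            out.append(tokens[i])             # unknown token kept as-is
            i += 1
    return out

print(compress("dữ liệu mạng xã hội".split()))  # [1, 0]
```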

    The Emerging Wearable Solutions in mHealth

    The marriage of wearable sensors and smartphones has fashioned a foundation for mobile health technologies that enable healthcare to be unimpeded by geographical boundaries. Sweeping efforts are under way to develop a wide variety of smartphone-linked wearable biometric sensors and systems. This chapter reviews recent progress in the field of wearable technologies, with a focus on key solutions for fall detection and prevention, Parkinson’s disease assessment, and cardiac disease, blood pressure and blood glucose management. In particular, smartphone-based systems that require no external wearables are summarized and discussed.

    Analysis of Android Device-Based Solutions for Fall Detection

    Falls are a major cause of health and psychological problems, as well as hospitalization costs, among older adults. Thus, the investigation of automatic Fall Detection Systems (FDSs) has received special attention from the research community during the last decade. In this area, the widespread popularity, decreasing price, computing capabilities, built-in sensors and multiplicity of wireless interfaces of Android-based devices (especially smartphones) have fostered the adoption of this technology for deploying wearable and inexpensive architectures for fall detection. This paper presents a critical and thorough analysis of the existing fall detection systems that are based on Android devices. The review systematically classifies and compares the proposals in the literature according to different criteria such as the system architecture, the employed sensors, the detection algorithm, or the response in case of a fall alarm. The study emphasizes the analysis of the evaluation methods employed to assess the effectiveness of the detection process. The review reveals the complete lack of a reference framework for validating and comparing the proposals. In addition, the study shows that most research works do not evaluate the actual applicability of Android devices (with their limited battery and computing resources) to fall detection solutions.
    Ministerio de Economía y Competitividad TEC2013-42711-

    Comparison and Characterization of Android-Based Fall Detection Systems

    Falls are a foremost source of injuries and hospitalization among seniors. The adoption of automatic fall detection mechanisms can noticeably reduce the response time of medical staff or caregivers when a fall takes place. Smartphones are being increasingly proposed as wearable, cost-effective and non-intrusive systems for fall detection. The exploitation of smartphones’ potential (and in particular, of the Android operating system) can benefit from the wide deployment, growing computational capabilities and diversity of communication interfaces and embedded sensors of these personal devices. After reviewing the state of the art on this matter, this study develops an experimental testbed to assess the performance of different fall detection algorithms that ground their decisions on the analysis of the inertial data registered by the accelerometer of the smartphone. Results obtained in a real testbed with diverse individuals indicate that the accuracy of accelerometry-based techniques in identifying falls depends strongly on the fall pattern. The performed tests also show the difficulty of setting detection acceleration thresholds that achieve a good trade-off between false negatives (falls that remain unnoticed) and false positives (conventional movements that are erroneously classified as falls). In any case, the study of the evolution of the battery drain reveals that the extra power consumption introduced by the Android monitoring applications cannot be neglected when evaluating the autonomy and even the viability of fall detection systems.
    Ministerio de Economía y Competitividad TEC2009-13763-C02-0
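
    A common baseline among accelerometry-based algorithms of this kind is a two-threshold test on the acceleration magnitude: a free-fall dip followed shortly by an impact spike. The sketch below illustrates this generic scheme only; the thresholds and window size are illustrative assumptions, not values from the paper.

```python
import math

# Generic smartphone fall-detection heuristic: look for a free-fall dip
# in the acceleration magnitude followed shortly by an impact spike.
G = 9.81             # m/s^2
FREE_FALL = 0.6 * G  # magnitude below this suggests free fall (assumed)
IMPACT = 2.5 * G     # magnitude above this suggests impact (assumed)
WINDOW = 50          # samples (~1 s at 50 Hz) between dip and spike

def magnitude(sample):
    ax, ay, az = sample
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples):
    """Return True if a free-fall dip is followed by an impact spike."""
    mags = [magnitude(s) for s in samples]
    for i, m in enumerate(mags):
        if m < FREE_FALL and any(m2 > IMPACT for m2 in mags[i + 1:i + 1 + WINDOW]):
            return True
    return False

# Synthetic trace: standing still, brief free fall, hard impact, rest.
trace = [(0, 0, G)] * 20 + [(0, 0, 1.0)] * 10 + [(0, 0, 30.0)] + [(0, 0, G)] * 20
print(detect_fall(trace))  # True
```

    The false-positive/false-negative trade-off discussed above corresponds directly to the choice of FREE_FALL and IMPACT: loosening them catches more real falls but misclassifies more everyday movements.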

    MolabIS: A Labs Backbone for Storing, Managing and Evaluating Molecular Genetics Data

    Using paper lab books and spreadsheets to store and manage growing datasets in a file system is inefficient, time consuming and error-prone. The overall purpose of this study is therefore to develop an integrated information system for small laboratories conducting Sanger sequencing and microsatellite genotyping projects. To this end, the thesis investigated the following three issues. First, we proposed a uniform, workflow-based solution for efficiently collecting and storing data items in different labs. The outcome is the design of a formalized data framework, which is the basis for creating a general data model for biodiversity studies. Second, we designed and implemented a web-based information system (MolabIS) that allows lab staff to store all original data at each step of their workflow. MolabIS provides essential tools to import, store, organize, search, modify, report and export relevant data. Finally, we conducted a case study to evaluate the performance of MolabIS under typical operations in production mode. Based on this, we propose the use of a virtual appliance as an efficient solution for deploying complex open-source information systems like MolabIS. The major result of this study, along with the publications, is the MolabIS software itself, which is freely released under the GPL license at http://www.molabis.org. With its general data model, easy installation process and additional tools for data migration, MolabIS can be used in a wide range of molecular genetics labs.
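
    The workflow-based data framework can be pictured as a chain of records in which each step references the step that produced its input, keeping every result traceable back to the original sample. A purely illustrative sketch (the entity names are ours, not the MolabIS schema):

```python
from dataclasses import dataclass

# Illustrative workflow chain for a Sanger sequencing project: each record
# references the step that produced its input, so every result remains
# traceable back to the original sample.
@dataclass
class Sample:
    sample_id: str
    species: str

@dataclass
class PcrProduct:
    product_id: str
    sample: Sample
    primer_pair: str

@dataclass
class SequencingRun:
    run_id: str
    template: PcrProduct
    chromatogram_file: str

sample = Sample("S-001", "Bos taurus")
pcr = PcrProduct("P-001", sample, "BM1824")
run = SequencingRun("R-001", pcr, "S-001_BM1824.ab1")
print(run.template.sample.sample_id)  # walk the chain back to the sample
```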