16 research outputs found

    Deep Learning in Medical Image Analysis

    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    Fine spatial scale modelling of Trentino past forest landscape and future change scenarios to study ecosystem services through the years

    The landscape of Europe has changed dramatically in recent decades. This has been especially true for Alpine regions, where the progressive urbanization of the valleys has been accompanied by the abandonment of smaller villages and areas at higher elevation. This trend has been clearly observable in the Provincia Autonoma di Trento (PAT) region in the Italian Alps. The impact has been substantial for many rural areas, with the progressive shrinking of meadows and pastures due to natural forest recolonization. These modifications of the landscape affect biodiversity and social and cultural dynamics, including landscape perception and some ecosystem services. A literature review showed that this topic has been addressed by several authors across the Alps, but their studies are limited in spatial coverage, spatial resolution and time span. This thesis aims to create a comprehensive dataset of historical maps and multitemporal orthophotos for the PAT area, to analyse these data to identify changes in forest and open areas, to evaluate how these changes affected landscape structure and ecosystems, to create a future change scenario for a test area, and to highlight some major changes in ecosystem services through time.
    In this study a high resolution dataset of maps covering the whole PAT area for over a century was developed. The earliest representation of the PAT territory containing reliable data about forest coverage was considered to be the Historic Cadastral maps of 1859. These maps systematically and accurately represented the land use of each parcel in the Habsburg Empire, including the PAT. The Italian Kingdom Forest Maps were the next important source of information about forest coverage after World War I, followed by the most recent datasets: the greyscale images of 1954 and 1994 and the multiband images of 2006 and 2015. The purpose of the dataset development is twofold: on one hand, to create a series of maps describing forest and open area coverage over the last 160 years for the whole PAT; on the other, to set up and test procedures for extracting the relevant information from imagery and historical maps. The datasets were archived, processed and analysed using the Free and Open Source Software (FOSS) GIS packages GRASS and QGIS, together with R. The goal set by this work was achieved by a remote sensing analysis of these maps and aerial images. A series of procedures was applied to extract a land use map, with the forest categories reaching a level of detail rarely achieved for a study area of such extent (6,200 km²). The resolution of the original maps is in fact at the metre level, whereas the coarsest resampling adopted is 10 m × 10 m pixels. The great variety and size of the input data required the development, alongside the main part of the research, of a series of new tools for automating the analysis of the aerial imagery, to reduce user intervention. New tools for historic map classification were developed as well, to eliminate map symbols (e.g. signs) from the resulting land use maps, thus enhancing the results. Once the multitemporal forest maps were obtained, the second phase of the work was a qualitative and quantitative assessment of the forest coverage and of how it changed. This was performed by evaluating a number of landscape metrics, indexes used to quantify the compaction or rarefaction of the forest areas.
    A recurring issue in the literature on landscape metrics was identified during this analysis and studied extensively. This highlighted the importance of specifying certain parameters in the most widely used landscape fragmentation analysis software, so that the results of different studies are properly comparable. Within this analysis, data from other maps were used to characterize the process of afforestation in PAT: the potential forest maps, used to quantify the area of potential forest actually afforested through the years; the Digital Elevation Model, used to quantify the changes in forest area at different ranges of altitude; and the forest class map, used to estimate how afforestation has affected each single forest type. The output forest maps were used to analyse and estimate some ecosystem services, in particular protection from soil erosion, changes in biodiversity and the landscape of the forests. Finally, a procedure for the analysis of future change scenarios was set up to study how afforestation would proceed, in the absence of external factors, in a protected area of PAT. The procedure was developed using Agent Based Models, which treat trees as thinking agents able to choose where to expand the forest area.
    The first part of the results consists of a temporal series of maps representing the state of the forest in each year of the considered dataset. The analysis of these maps suggests a trend of afforestation across the PAT territory. The forest maps were then reclassified by altitude range and forest type to show how afforestation proceeded at different altitudes and for different forest types. The results showed that forest expansion acted homogeneously across altitudes and forest types. The analysis of a selected set of landscape metrics showed a progressive compaction of the forests at the expense of the open areas, in each altitude range and for each forest type. This generated on one hand a benefit for all those ecosystem services linked to high forest cover, while on the other it reduced ecotonal habitats and affected biodiversity distribution and quality. Finally, the ABM procedure produced a set of maps representing a possible evolution of the forest in an area of PAT, consistent with other simulations developed using different models in the same area. A second part of the results consists of new open source tools for image analysis, developed to achieve the results shown here but with a potentially wider field of application, along with new procedures for the evaluation of image classification. The work fulfilled its aims, providing new tools and enhancements of existing tools for remote sensing, and leaving as a legacy a large dataset that will be used to deepen the knowledge of the territory of PAT and, more widely, to study emerging patterns of afforestation in an Alpine environment.
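    As an illustration of the altitude-band analysis, the following is a minimal sketch, assuming a binary forest raster and a DEM already resampled to the same 10 m × 10 m grid; the band edges, array shapes and synthetic data are hypothetical and not taken from the thesis.

```python
# Minimal sketch (not the thesis code): forest fraction per elevation band,
# assuming a binary forest raster and a DEM resampled to the same grid.
import numpy as np

rng = np.random.default_rng(0)
forest = rng.integers(0, 2, size=(500, 500))   # 1 = forest, 0 = open area
dem = rng.uniform(200, 2800, size=(500, 500))  # elevation in metres

bands = [200, 800, 1400, 2000, 2800]           # hypothetical altitude ranges
for lo, hi in zip(bands[:-1], bands[1:]):
    mask = (dem >= lo) & (dem < hi)
    frac = forest[mask].mean() if mask.any() else float("nan")
    print(f"{lo:>4}-{hi:<4} m: forest fraction = {frac:.2%}")
```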

    Mobile Health Technologies

    Mobile Health Technologies, also known as mHealth technologies, have emerged amongst healthcare providers as the technologies of choice for the 21st century, delivering not only transformative change in healthcare delivery but also critical health information to different communities of practice in integrated healthcare information systems. mHealth technologies offer seamless platforms and pragmatic tools for managing pertinent health information across the continuum of different healthcare providers. mHealth technologies commonly utilize mobile medical devices, monitoring and wireless devices, and/or telemedicine in healthcare delivery and health research. Today, mHealth technologies provide opportunities to record and monitor the conditions of patients with chronic diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD) and diabetes mellitus. The intent of this book is to enlighten readers about the theories and applications of mHealth technologies in the healthcare domain.

    Automated Testing: Requirements Propagation via Model Transformation in Embedded Software

    Testing is the most common activity used to validate software systems and plays a key role in the software development process. In general, the software testing phase takes around 40-70% of the effort, time and cost. This area has been well researched over a long period of time. Unfortunately, while many researchers have found methods of reducing time and cost during the testing process, a number of important related issues, such as generating test cases from UCM scenarios and validating them, still need to be researched. As a result, ensuring that embedded software behaves correctly is non-trivial, especially when testing with limited resources and seeking compliance with safety-critical software standards. It thus becomes imperative to adopt an approach or methodology based on tools and best engineering practices to improve the testing process. This research addresses the problem of testing embedded software with limited resources as follows. First, a reverse-engineering technique is exercised on legacy software tests, aiming to discover a feasible transformation from the test layer to the test-requirement layer. The feasibility of transforming the legacy test cases into an abstract model is shown, along with a forward-engineering process to regenerate the test cases in a selected test language. Second, a new model-driven testing technique based on different granularity levels (MDTGL) is introduced to generate test cases. The new approach uses models in order to manage the complexity of the system under test (SUT). Automatic model transformation is applied to automate test case development, which is a tedious, error-prone, and recurrent software development task. Third, the model transformations that automate the development of test cases in the MDTGL methodology are validated against an industrial testing process using an embedded software specification. To enable the validation, a set of timed and functional requirements is introduced. Two case studies are run on an embedded system to generate test cases. The effectiveness of the two testing approaches is determined and contrasted according to the generation of test cases and the correctness of the generated workflow. Compared to several existing techniques, the new approach generated useful and effective test cases with far fewer resources in terms of time and labor. Finally, to enhance the applicability of MDTGL, the methodology is extended with the creation of a trace model that records traceability links among the generated testing artifacts. These traceability links, often mandated by software development standards, enable support for traceability visualization, model-based coverage analysis and result evaluation.
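    To make the model-transformation idea concrete, here is a minimal sketch, not the MDTGL implementation: a toy abstract test-requirement model is mechanically transformed into test scripts. The Requirement fields, the script syntax and the REQ naming are assumptions introduced for illustration.

```python
# Minimal sketch of a model-to-test transformation, under stated assumptions:
# an abstract test-requirement model is mechanically turned into concrete
# test scripts. Field names and the target script syntax are hypothetical.
from dataclasses import dataclass

@dataclass
class Requirement:
    rid: str           # requirement id, kept in the test name for traceability
    stimulus: str      # input applied to the system under test
    expected: str      # observable expected response
    timeout_ms: int    # timed requirement: response deadline

def to_test_case(req: Requirement) -> str:
    """Forward transformation: abstract requirement -> concrete test script."""
    return (
        f"test {req.rid} {{\n"
        f"    send({req.stimulus!r});\n"
        f"    expect({req.expected!r}, within_ms={req.timeout_ms});\n"
        f"}}"
    )

reqs = [Requirement("REQ-001", "ignition_on", "engine_started", 500)]
for r in reqs:
    print(to_test_case(r))
```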

    Examining association between construction inspection grades and critical defects using data mining and fuzzy logic

    This paper explores the relations between defect types and quality inspection grades of public construction projects in Taiwan. Altogether, 499 defect types (classified from 17,648 defects) were found after analyzing 990 construction projects from the Public Construction Management Information System of the Public Construction Commission, the government unit that administers all public construction. The core of this research includes the following steps. (1) Data mining (DM) was used to derive 57 association rules, which altogether contain 30 of the 499 defect types. (2) K-means clustering was used to regroup the 990 projects, described by two attributes (defect frequency and original grading score of each project), into four new quality classes, so that the projects are more evenly distributed across the four new classes and the correctness and reliability of the subsequent analyses is ensured. (3) Finally, analysis of variance (ANOVA), fuzzy logic, and correlation analysis were used to verify that the aforementioned 30 defect types are the important ones determining inspection grades. The results of this research can help stakeholders of construction projects pay more attention to the root causes of the critical defect types and thus substantially raise their management effectiveness.
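    A minimal sketch of the clustering in step (2), using scikit-learn's KMeans on the two attributes named above; the synthetic data are placeholders for the 990 projects, not the paper's actual records.

```python
# Minimal sketch of step (2): regroup projects described by two attributes
# (defect frequency, original grading score) into four classes with k-means.
# The synthetic data below stand in for the 990 real projects.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
defect_freq = rng.poisson(18, size=990)        # defects found per project
grade_score = rng.normal(80, 6, size=990)      # original inspection score
X = np.column_stack([defect_freq, grade_score])

km = KMeans(n_clusters=4, n_init=10, random_state=1).fit(X)
for c in range(4):
    sel = km.labels_ == c
    print(f"class {c}: n={sel.sum():4d}, "
          f"mean defects={defect_freq[sel].mean():5.1f}, "
          f"mean score={grade_score[sel].mean():5.1f}")
```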

    Named Entity Recognition and Text Compression

    In recent years, social networks have become very popular. It is easy for users to share their data using online social networks. Since data on social networks are idiomatic, irregular, brief, and include acronyms and spelling errors, dealing with such data is more challenging than dealing with news or other formal texts. With the huge volume of posts each day, effective extraction and processing of these data will bring great benefit to information extraction applications. This thesis proposes a method to normalize Vietnamese informal text from social networks. The method identifies and normalizes informal text based on the structure of Vietnamese words, Vietnamese syllable rules, and a trigram model. After normalization, the data are processed by a named entity recognition (NER) model to identify and classify the named entities in the data. Our NER model uses six different types of features to recognize named entities categorized in three predefined classes: Person (PER), Location (LOC), and Organization (ORG). When examining social network data, we found that their size is very large and increases daily, which raises the challenge of how to decrease this size. Moreover, because of the size of the data to be normalized, the trigram dictionary we use is quite big, so we also need to decrease its size. To deal with this challenge, this thesis proposes three methods to compress text files, especially Vietnamese text. The first method is a syllable-based method relying on the structure of Vietnamese morphosyllables, consonants, syllables and vowels. The second method is trigram-based Vietnamese text compression based on a trigram dictionary. The last method is based on an n-gram sliding window, in which we use five dictionaries for unigrams, bigrams, trigrams, four-grams and five-grams. This method achieves a promising compression ratio of around 90% and can be used for text files of any size.
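    A minimal sketch of the dictionary-based n-gram compression idea, restricted to bigrams on toy data; the encoding scheme shown is an assumption for illustration, not the thesis implementation (which uses separate dictionaries for unigrams up to five-grams and targets Vietnamese text).

```python
# Minimal sketch of dictionary-based n-gram compression: frequent word
# n-grams are replaced by short integer codes from a pre-built dictionary.
def build_dictionary(tokens, n):
    counts = {}
    for i in range(len(tokens) - n + 1):
        g = tuple(tokens[i:i + n])
        counts[g] = counts.get(g, 0) + 1
    ranked = sorted(counts, key=counts.get, reverse=True)
    return {g: code for code, g in enumerate(ranked)}

def encode(tokens, dictionary, n):
    out, i = [], 0
    while i < len(tokens):
        g = tuple(tokens[i:i + n])
        if g in dictionary:            # dictionary hit: emit the short code
            out.append(("D", dictionary[g]))
            i += n
        else:                          # miss: emit the literal word
            out.append(("W", tokens[i]))
            i += 1
    return out

text = "xin chao cac ban xin chao".split()
bigram_dict = build_dictionary(text, 2)
print(encode(text, bigram_dict, 2))
```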

    To Develop a Database Management Tool for Multi-Agent Simulation Platform

    Recently, there has been a shift from a modeling-driven approach to a data-driven approach in Agent Based Modeling and Simulation (ABMS). This trend towards the use of data-driven approaches in simulation aims at feeding more and more of the data available from observation systems into simulation models (Edmonds and Moss, 2005; Hassan, 2009). In a data-driven approach, the empirical data collected from the target system are used not only for the design of the simulation models but also for the initialization, calibration and evaluation of their outputs; examples include the water resource management and assessment system of the French Adour-Garonne Basin (Gaudou et al., 2013) and the invasion of the Brown Plant Hopper on the rice fields of the Mekong River Delta region in Vietnam (Nguyen et al., 2012d). That raises the question of how to manage empirical data and simulation data in such agent-based simulation platforms. The basic observation we can make is that, while the design and simulation of models have benefited from advances in computer science through the popularized use of simulation platforms like Netlogo (Wilensky, 1999) or GAMA (Taillandier et al., 2012), this is not yet the case for the management of data, which are still often managed in an ad hoc manner. Data management is one of the current limitations of agent-based simulation platforms; in other words, a database management tool is an important missing piece of such systems. In this thesis, I first propose a logical framework for data management in multi-agent based simulation platforms. The proposed framework, called CFBM (Combination Framework of Business intelligence and Multi-agent based platform), combines a Business Intelligence solution with a multi-agent based platform, and it serves several purposes: (1) model and execute multi-agent simulations, (2) manage input and output data of simulations, (3) integrate data from different sources, and (4) analyze high volumes of data. Secondly, I fulfill the need for data management in ABM through an implementation of CFBM in the GAMA platform. This implementation also demonstrates a software architecture for combining Data Warehouse (DWH) and Online Analytical Processing (OLAP) technologies in a multi-agent based simulation system. Finally, I evaluate CFBM for data management in the GAMA platform via the development of Brown Plant Hopper Surveillance Models (BSMs), where CFBM is used not only to manage and integrate the empirical data collected from the target system and the data produced by the simulation model, but also to calibrate and validate the models. The successful development of CFBM consists not only in remedying the limitations of agent-based modeling and simulation with regard to data management but also in enabling the development of complex simulation systems with large amounts of input and output data supporting a data-driven approach.
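    A minimal sketch of the data-management pattern described above, with SQLite and a hypothetical table standing in for the Data Warehouse and OLAP components that CFBM integrates into GAMA; the region name and density values are invented for illustration.

```python
# Minimal sketch: simulation outputs go into a relational store and are
# aggregated OLAP-style for comparison with empirical observations. SQLite
# and the table layout are illustrative stand-ins for the DWH/OLAP stack.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE simulation_output
               (run_id INTEGER, step INTEGER, region TEXT, density REAL)""")
rows = [(1, t, "An Giang", 50.0 + 3.0 * t) for t in range(5)]
con.executemany("INSERT INTO simulation_output VALUES (?, ?, ?, ?)", rows)

# Aggregate simulated hopper density per region, the kind of summary that
# would be compared against empirical surveillance data for calibration.
for region, mean_density in con.execute(
        "SELECT region, AVG(density) FROM simulation_output GROUP BY region"):
    print(region, round(mean_density, 1))
```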

    Analysis of Android Device-Based Solutions for Fall Detection

    Falls are a major cause of health and psychological problems, as well as hospitalization costs, among older adults. Thus, the investigation of automatic Fall Detection Systems (FDSs) has received special attention from the research community during the last decade. In this area, the widespread popularity, decreasing price, computing capabilities, built-in sensors and multiplicity of wireless interfaces of Android-based devices (especially smartphones) have fostered the adoption of this technology to deploy wearable and inexpensive architectures for fall detection. This paper presents a critical and thorough analysis of the existing fall detection systems that are based on Android devices. The review systematically classifies and compares the proposals in the literature, taking into account different criteria such as the system architecture, the employed sensors, the detection algorithm, and the response in case of a fall alarm. The study emphasizes the analysis of the evaluation methods employed to assess the effectiveness of the detection process. The review reveals the complete lack of a reference framework to validate and compare the proposals. In addition, the study shows that most research works do not evaluate the actual applicability of Android devices (with their limited battery and computing resources) to fall detection solutions.
    Ministerio de Economía y Competitividad TEC2013-42711-

    Comparison and Characterization of Android-Based Fall Detection Systems

    Falls are a foremost source of injuries and hospitalization for seniors. The adoption of automatic fall detection mechanisms can noticeably reduce the response time of the medical staff or caregivers when a fall takes place. Smartphones are being increasingly proposed as wearable, cost-effective and non-intrusive systems for fall detection. The exploitation of smartphones’ potential (and in particular, of the Android Operating System) can benefit from the wide deployment, the growing computational capabilities and the diversity of communication interfaces and embedded sensors of these personal devices. After revising the state of the art on this matter, this study develops an experimental testbed to assess the performance of different fall detection algorithms that ground their decisions on the analysis of the inertial data registered by the accelerometer of the smartphone. Results obtained in a real testbed with diverse individuals indicate that the accuracy of accelerometry-based techniques in identifying falls depends strongly on the fall pattern. The tests also show the difficulty of setting detection acceleration thresholds that achieve a good trade-off between false negatives (falls that remain unnoticed) and false positives (conventional movements that are erroneously classified as falls). In any case, the study of the evolution of the battery drain reveals that the extra power consumption introduced by the Android monitoring applications cannot be neglected when evaluating the autonomy and even the viability of fall detection systems.
    Ministerio de Economía y Competitividad TEC2009-13763-C02-0
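    A minimal sketch of the threshold-based detection principle that the testbed evaluates; the free-fall and impact thresholds, the window length and the synthetic samples are assumptions for illustration, and, as the study notes, such thresholds are hard to tune well in practice.

```python
# Minimal sketch of threshold-based fall detection: a fall is flagged when
# a free-fall dip in acceleration magnitude is followed shortly by an
# impact peak. Thresholds and window length are illustrative only.
import math

def detect_fall(samples, free_fall_g=0.4, impact_g=2.5, window=20):
    """samples: sequence of (ax, ay, az) accelerometer readings in g."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g:                        # free-fall phase detected
            if any(v > impact_g for v in mags[i:i + window]):
                return True                        # impact follows the dip
    return False

still = [(0.0, 0.0, 1.0)] * 50                     # phone at rest: ~1 g
fall = still[:10] + [(0.0, 0.0, 0.2)] * 3 + [(0.5, 0.5, 3.0)] + still[:10]
print(detect_fall(still), detect_fall(fall))       # -> False True
```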